Feb 13 02:41:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:19.439998 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:24.439390 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:26 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:41:26.184235 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:41:26 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:41:26.184578 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:41:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:29.439418 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:34.439738 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:39.439451 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:44.439551 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:41:48.261923 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:41:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:41:48.261953 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:41:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:49.439764 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:54.440139 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:41:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:41:59.439395 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:04.439685 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:09.440329 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:14.440374 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:19.439553 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:19 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:42:19.946331 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:42:19 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:42:19.946349 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:42:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:24.440817 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:29.440040 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:34.440035 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:39.439774 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:40 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:42:40.787491 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:42:40 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:42:40.788100 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:42:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:44.440329 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:49.439689 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:54.440042 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:42:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:42:59.439379 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:04.439600 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:09.440022 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:14.439688 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:14 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:43:14.604292 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:43:14 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:43:14.604443 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:43:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:19.440217 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:22 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:43:22.889220 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:43:22 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:43:22.889584 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:43:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:24.439907 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:29.440897 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:34.439846 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:39.439924 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:44.440372 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:43:48.202303 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:43:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:43:48.202805 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:43:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:49.440085 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:54.439581 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:43:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:43:59.440215 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:04.440375 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:08 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:44:08.260716 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:44:08 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:44:08.260746 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:44:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:09.440015 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:14.439420 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:19.439582 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:24.439547 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:24 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:44:24.795784 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:44:24 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:44:24.795804 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:44:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:29.439507 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:34.439734 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:39.440367 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:44.440211 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:49.439741 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:54.439380 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:44:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:44:59.439982 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:04 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:45:04.365205 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:45:04 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:45:04.365475 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:45:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:04.440281 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:04 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:45:04.457133 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:45:04 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:45:04.457217 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:45:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:09.440045 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:14.440259 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:19.440016 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:24.440013 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:29.439928 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:34.439479 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:39.439641 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:44.439701 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:49.439718 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:49 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:45:49.639157 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:45:49 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:45:49.639176 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:45:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:54.440394 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:45:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:45:59.439735 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:01 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:46:01.040841 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:46:01 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:46:01.040866 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:46:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:04.439772 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:09.439841 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:14.439790 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:19.440557 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:24.439891 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:29.440353 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:34.440211 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:39.440433 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:44.439430 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:47 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:46:47.421785 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:46:47 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:46:47.422098 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:46:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:49.440365 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:54.440295 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:46:58 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:46:58.097507 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:46:58 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:46:58.097540 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:46:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:46:59.440073 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:04.439896 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:09.440281 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:14.439877 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:19.440145 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:24.440131 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:29.439718 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:34.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:36 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:47:36.365899 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:47:36 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:47:36.366296 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:47:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:39.440144 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:44.440180 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:47:45.461489 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:47:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:47:45.461999 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:47:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:49.439833 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:54.439591 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:47:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:47:59.439586 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:04.439726 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:09.440281 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:09 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:48:09.963984 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:48:09 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:48:09.964008 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:48:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:14.439581 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:19.439921 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:21 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:48:21.304531 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:48:21 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:48:21.304926 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:48:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:24.440414 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:29.440479 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:34.440367 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:39.439932 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:44.440000 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:49.440197 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:54.440129 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:48:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:48:59.440224 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:04.440346 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:04 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:49:04.815083 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:49:04 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:49:04.815108 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:49:07 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:49:07.326978 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:49:07 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:49:07.327003 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:49:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:09.439647 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:14.439468 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:19.439729 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:24.439982 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:29.440230 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:34.440281 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:39.440321 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:43 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:49:43.118717 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:49:43 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:49:43.119032 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:49:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:44.439557 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:46 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:49:46.171264 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:49:46 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:49:46.171290 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:49:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:49.439674 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:54.439720 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:49:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:49:59.439379 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:04.439922 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:09.439900 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:14.440048 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:19.439433 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:50:23.616326 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:50:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:50:23.616351 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:50:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:24.440299 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:29.440420 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:34.439450 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:39.439914 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:44.439779 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:50:45.165508 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:50:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:50:45.165547 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:50:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:49.439430 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:54.440030 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:50:56 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:50:56.803949 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:50:56 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:50:56.804104 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:50:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:50:59.439512 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:04.439354 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:09.439992 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:14.439396 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:19.439442 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:24.439859 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:28 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:51:28.740457 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:51:28 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:51:28.740810 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:51:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:29.440113 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:34.440035 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:39.439634 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:51:41 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:51:41.238487 2014 reflector.go:424]
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:51:41 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:51:41.238519 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:51:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:44.439428 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:51:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:49.440001 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:51:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:54.440306 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:51:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:51:59.439868 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:04.440299 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:09.440119 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:14.440222 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:18 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:52:18.628553 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 
02:52:18 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:52:18.628862 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:52:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:19.440051 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:21 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:52:21.352518 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:52:21 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:52:21.352542 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:52:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:24.440160 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:29.440007 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:34.439850 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:39.439517 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:44.439604 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:49 
localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:49.439382 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:54.440147 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:52:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:52:59.440118 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:04.439442 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:09.440187 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:14.440028 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:53:15.411805 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:53:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:53:15.411829 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:53:16 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:53:16.975566 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:53:16 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:53:16.975815 2014 
reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:53:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:19.439891 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:24.440017 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:29.440230 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:34.439419 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:39.440022 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:44.439998 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:49.440084 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:54.439898 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:53:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:53:59.427774 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:53:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:53:59.427800 2014 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:53:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:53:59.440117 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:04.439741 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:09.439689 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:12 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:54:12.425932 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:54:12 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:54:12.425955 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:54:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:14.439805 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:19.439374 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:24.440201 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:29.440035 2014 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 02:54:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:34.439607 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:39.439460 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:44.440269 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:54:45.303034 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:54:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:54:45.303186 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:54:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:49.439450 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:54.439906 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:54:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:54:59.305758 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:54:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:54:59.306086 2014 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:54:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:54:59.440003 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:04.440025 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:09.439429 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:14.440255 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:19.440285 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:24.440232 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:29.439831 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:34.440149 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:38 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:55:38.255960 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:55:38 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:55:38.255981 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed 
to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:55:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:39.439847 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:44.439597 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:55:45.021120 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:55:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:55:45.021265 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:55:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:49.440066 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:54.439612 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:55:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:55:59.440057 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:04.439922 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:09.439468 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 
02:56:14.440145 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:19 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:56:19.265570 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:56:19 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:56:19.265746 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:56:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:19.439572 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:21 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:56:21.827809 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:56:21 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:56:21.828115 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:56:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:24.439967 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:29.440105 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:34.439392 2014 net.go:46] 
ovn gateway IP address: 192.168.122.17 Feb 13 02:56:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:39.440008 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:44.439423 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:49.439894 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:54.439752 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:56:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:56:59.439952 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:04.440314 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:57:06.659143 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:57:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:57:06.659182 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:57:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:09.439831 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:12 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:57:12.917859 2014 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:57:12 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:57:12.917882 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:57:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:14.439399 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:19.439922 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:24.440231 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:29.439454 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:34.439857 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:39.439866 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:39 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:57:39.519794 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:57:39 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:57:39.519815 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch 
*v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:57:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:44.439789 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:49.440081 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:54.439986 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:57:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:57:59.440230 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:00 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:58:00.529575 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:58:00 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:58:00.529609 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:58:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:04.439358 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:09.439742 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:13 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:58:13.724712 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find 
the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:58:13 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:58:13.724759 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 02:58:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:14.440203 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:19.439923 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:24.440257 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:29.440421 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:34.439809 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 02:58:35 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:58:35.705939 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:58:35 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:58:35.706265 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 02:58:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:39.439994 2014 net.go:46] ovn gateway IP 
address: 192.168.122.17
Feb 13 02:58:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:44.440270 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:58:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:58:48.683804 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:58:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:58:48.684196 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:58:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:49.439774 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:58:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:54.439882 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:58:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:58:59.440080 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:04.439421 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:59:06.853337 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:59:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:59:06.853359 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 02:59:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:09.440215 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:14.440216 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:19.440410 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 02:59:23.501096 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:59:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 02:59:23.501122 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 02:59:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:24.440132 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:29.440175 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:34.440737 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:39.439726 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:44.439845 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:49.439813 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:54.439540 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 02:59:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 02:59:59.440016 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:03 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:00:03.188712 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:00:03 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:00:03.188732 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:00:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:04.439461 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:00:06.323021 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:00:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:00:06.323051 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:00:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:09.440243 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:14.439899 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:19.440421 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:24.440598 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:29.439734 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:34.439582 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:39.440182 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:43 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:00:43.591352 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:00:43 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:00:43.591381 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:00:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:44.440126 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:49.439449 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:54.440331 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:00:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:00:59.440146 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:04.440358 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:05 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:01:05.864401 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:01:05 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:01:05.864429 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:01:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:09.439825 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:14.440338 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:19.439764 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:24.439650 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:29.439616 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:34.439420 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:39 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:01:39.140573 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:01:39 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:01:39.140881 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:01:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:39.439873 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:44.440195 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:49.439580 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:54.439986 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:01:57 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:01:57.438439 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:01:57 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:01:57.438843 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:01:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:01:59.439594 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:04.439994 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:09.440158 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:14.440203 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:19.440023 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:24.440309 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:27 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:02:27.906772 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:02:27 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:02:27.907115 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:02:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:29.440094 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:30 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:02:30.040766 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:02:30 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:02:30.040930 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:02:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:34.439601 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:39.440155 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:44.439704 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:49.440104 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:54.440467 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:02:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:02:59.439496 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:00 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:03:00.847642 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:03:00 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:03:00.848150 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:03:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:04.439790 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:03:06.272497 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:03:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:03:06.272793 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:03:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:09.440263 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:14.439431 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:19.439596 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:24.440075 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:29.439761 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:34.439909 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:39.439922 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:44.440167 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:49.439981 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:54.439949 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:03:54 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:03:54.775716 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:03:54 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:03:54.775839 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:03:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:03:59.407949 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:03:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:03:59.407971 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:03:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:03:59.439393 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:04.440111 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:09.439999 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:14.439322 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:19.439336 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:24.439790 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:29.439769 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:34.439717 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:39.439408 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:44.439887 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:49.440353 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:52 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:04:52.382435 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:04:52 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:04:52.382692 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:04:54 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:04:54.042142 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:04:54 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:04:54.042174 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:04:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:54.440395 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:04:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:04:59.440100 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:04.439401 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:09.439617 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:14.439677 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:19.440306 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:24.439996 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:27 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:05:27.600169 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:05:27 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:05:27.600189 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:05:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:29.439468 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:34.440000 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:39.439720 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:44.439786 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:05:48.698778 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:05:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:05:48.698806 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:05:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:49.440046 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:54.439584 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:05:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:05:59.440303 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:01 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:06:01.568067 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:06:01 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:06:01.568095 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:06:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:04.439979 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:09.439853 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:14.440441 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:19.439624 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:22 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:06:22.112620 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:06:22 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:06:22.112643 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:06:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:24.440219 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:29.439903 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:34.439469 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:39.439519 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:41 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:06:41.219760 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:06:41 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:06:41.219785 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:06:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:44.440377 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:49.440201 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:54.439398 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:06:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:06:59.439950 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:04.440336 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:07 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:07:07.782350 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:07:07 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:07:07.782376 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:07:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:09.440333 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:14.439535 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:19.439571 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:07:23.187566 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:07:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:07:23.187891 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:07:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:24.440071 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:29.439836 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:34.439931 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:39.439781 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:44.440242 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:49.440408 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:52 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:07:52.422821 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:07:52 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:07:52.422850 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:07:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:54.439367 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:07:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:07:59.439725 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:04.440098 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:08:06.751218 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:08:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:08:06.751240 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:08:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:09.440426 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:14.439528 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:19.440040 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:24.440039 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:29.439548 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:34.440009 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:39.439860 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:43 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:08:43.015423 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:08:43 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:08:43.015730 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:08:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:44.440291 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:49.439830 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:50 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:08:50.455337 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:08:50 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:08:50.455362 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:08:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:54.439473 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:08:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:08:59.440236 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:04.439947 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:09.440364 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:14.440055 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:19.440030 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:24.440431 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:09:24 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:09:24.654781 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:09:24 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:09:24.655022 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the
requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:09:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:29.440382 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:34.440428 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:36 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:09:36.062422 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:09:36 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:09:36.062720 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:09:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:39.440211 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:44.440033 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:49.440053 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:54.439897 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:09:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:09:59.439758 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:03 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:10:03.168037 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to 
list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:10:03 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:10:03.168371 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:10:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:04.440245 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:09.440243 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:14.440170 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:19.440560 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:24.440337 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:29.439562 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:33 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:10:33.471138 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:10:33 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:10:33.471158 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the 
server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:10:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:34.440117 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:39.440331 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:44.439884 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:49.439611 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:54.440016 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:10:54 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:10:54.988705 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:10:54 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:10:54.988893 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:10:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:10:59.439929 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:04.439863 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:09.440222 2014 
net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:14.439570 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:11:15.820825 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:11:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:11:15.820849 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:11:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:19.439970 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:24.440311 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:29.440153 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:34.440271 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:39.439779 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:44.439575 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:49.439444 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:53 localhost.localdomain microshift[2014]: 
kube-apiserver W0213 03:11:53.967778 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:11:53 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:11:53.967946 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:11:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:54.439962 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:11:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:11:59.439397 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:04.439573 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:05 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:12:05.615221 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:12:05 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:12:05.615531 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:12:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:09.439765 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 
03:12:14.440382 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:19.439415 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:24.440526 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:29.440349 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:34.439802 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:39.440284 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:44.439633 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:12:48.950031 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:12:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:12:48.950639 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:12:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:49.439740 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:50 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:12:50.849541 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: 
failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:12:50 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:12:50.849565 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:12:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:54.439916 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:12:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:12:59.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:04.439961 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:09.440138 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:14.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:19.440232 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:13:23.273205 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:13:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:13:23.273599 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list 
*v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:13:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:24.439404 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:29.440003 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:34.439995 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:36 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:13:36.372432 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:13:36 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:13:36.372798 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:13:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:39.440419 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:44.439477 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:49.439920 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:54.439545 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:13:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:13:59.439872 
2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:04.439954 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:05 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:14:05.336687 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:14:05 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:14:05.336713 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:14:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:09.440496 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:11 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:14:11.556185 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:14:11 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:14:11.556216 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:14:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:14.439525 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:19.440355 2014 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 03:14:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:24.440750 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:29.439390 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:34.439363 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:39.439613 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:44.439399 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:49.440152 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:52 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:14:52.363050 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:14:52 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:14:52.363079 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:14:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:54.440073 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:14:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:14:59.439816 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:03 localhost.localdomain microshift[2014]: kube-apiserver W0213 
03:15:03.616857 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:15:03 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:15:03.616882 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:15:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:04.440097 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:09.440219 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:14.439602 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:19.440497 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:24.439655 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:29.439942 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:34.440116 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:39.439901 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:44.440292 2014 
net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:15:48.032329 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:15:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:15:48.032605 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:15:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:49.440121 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:54.439769 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:15:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:15:59.336238 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:15:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:15:59.336647 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:15:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:15:59.439789 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:04.439407 2014 net.go:46] ovn gateway IP address: 
192.168.122.17 Feb 13 03:16:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:09.439561 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:14.440168 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:19.439983 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:24.440180 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:29.440082 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:34.439560 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:38 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:16:38.379911 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:16:38 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:16:38.380181 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:16:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:39.440315 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:44.439428 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 
03:16:49.440148 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:52 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:16:52.621178 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:16:52 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:16:52.621359 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:16:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:54.440383 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:16:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:16:59.439552 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:17:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:04.439822 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:17:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:09.440115 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:17:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:14.439549 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:17:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:17:15.048853 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:17:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:17:15.048998 2014 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:17:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:19.439825 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:17:23.807874 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:17:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:17:23.807900 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:17:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:24.439993 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:29.440406 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:34.440302 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:39.440416 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:44.439629 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:48 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:17:48.170392 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:17:48 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:17:48.170609 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:17:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:49.440269 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:54.439959 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:17:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:17:59.439476 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:04.439727 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:05 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:18:05.556568 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:18:05 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:18:05.556936 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:18:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:09.440273 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:14.440410 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:19.439714 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:20 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:18:20.267445 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:18:20 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:18:20.267465 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:18:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:24.439765 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:29.439503 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:34.440242 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:39.440226 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:44.439499 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:49.440473 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:50 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:18:50.694258 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:18:50 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:18:50.695032 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:18:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:54.440209 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:18:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:18:59.439879 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:03 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:19:03.296939 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:19:03 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:19:03.297347 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:19:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:04.440099 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:09.439992 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:14.439742 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:19.440000 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:24.439693 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:29.439908 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:34.439879 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:39.440314 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:44 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:19:44.393346 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:19:44 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:19:44.393786 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:19:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:44.440195 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:46 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:19:46.104105 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:19:46 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:19:46.104129 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:19:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:49.439811 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:54.439707 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:19:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:19:59.439836 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:04.440010 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:09.439359 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:14.439751 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:19.440695 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:24.440259 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:29.440109 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:31 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:20:31.664886 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:20:31 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:20:31.665348 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:20:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:34.439879 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:39.439570 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:44 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:20:44.166350 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:20:44 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:20:44.166376 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:20:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:44.439402 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:49.439646 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:54.440379 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:20:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:20:59.440305 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:04.439596 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:09.439883 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:14.439854 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:14 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:21:14.603401 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:21:14 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:21:14.603595 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:21:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:19.440195 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:24.440702 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:28 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:21:28.421157 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:21:28 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:21:28.421191 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:21:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:29.440020 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:34.439545 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:39.440002 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:44.440058 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:49.440073 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:53 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:21:53.488681 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:21:53 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:21:53.488701 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:21:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:54.439631 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:21:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:21:59.440086 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:04.439638 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:09.440256 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:11 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:22:11.810989 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:22:11 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:22:11.811011 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:22:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:14.439711 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:19.440062 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:24.440432 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:29.440037 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:34.440093 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:39.440136 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:44.439891 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:49.439586 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:50 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:22:50.077317 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:22:50 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:22:50.077451 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:22:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:54.439371 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:22:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:22:59.439508 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:04.439792 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:09.439562 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:10 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:23:10.379589 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:23:10 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:23:10.380009 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:23:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:14.439500 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:19.439648 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:24.439671 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:29.439999 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:34.440176 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:39.439728 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:44.440009 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:23:45.411838 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:23:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:23:45.411864 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:23:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:49.439965 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:54.439372 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:23:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:23:59.439773 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:04.440378 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:24:06.459507 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:24:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:24:06.459537 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:24:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:09.440097 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:14.439786 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:24:15.481594 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:24:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:24:15.481621 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:24:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:19.439984 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:24.440474 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:29.440272 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:34.440308 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:39.439604 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:41 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:24:41.395426 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:24:41 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:24:41.395449 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:24:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:44.440004 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:49.439577 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:54.439968 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:24:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:24:59.440011 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:04.440172 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:09.440030 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:09 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:25:09.798885 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:25:09 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:25:09.798942 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:25:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:14.440208 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:19.439946 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:24.440299 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:26 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:25:26.735880 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:25:26 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:25:26.736209 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:25:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:29.440105 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:34.440347 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:39.439938 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:44.439594 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:47 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:25:47.979116 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:25:47 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:25:47.979144 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:25:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:49.440143 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:54.440178 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:25:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:25:59.440029 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:00 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:26:00.351252 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:26:00 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:26:00.351452 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:26:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:04.440570 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:09.439876 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:14.440046 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:19.439745 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:24.439530 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:29.440172 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:33 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:26:33.988548 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:26:33 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:26:33.988998 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:26:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:34.439904 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:39.440482 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:44.440266 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:26:45.397827 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:26:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:26:45.397959 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:26:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:49.439766 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:54.439924 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:26:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:26:59.440320 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:04.439849 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:09.439572 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:14.440461 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:19.440075 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:27:23.232412 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:27:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:27:23.232451 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:27:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:24.440528 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:29.439949 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:34.439600 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:39.440296 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:43 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:27:43.828036 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:27:43 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:27:43.828360 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:27:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:44.439814 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:49.439516 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:54.439822 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:27:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:27:59.439619 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:01 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:28:01.630155 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:28:01 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:28:01.630426 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:28:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:04.439735 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:09.440039 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:14.439513 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:28:15.520259 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:28:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:28:15.520282 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:28:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:19.439575 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:24.439826 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:29.440268 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:34.439620 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:28:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:39.440317 2014 net.go:46] ovn gateway IP address:
192.168.122.17 Feb 13 03:28:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:44.440269 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:28:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:49.439845 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:28:53 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:28:53.935213 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:28:53 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:28:53.935616 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:28:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:54.439962 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:28:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:28:59.439795 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:04.440348 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:07 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:29:07.060020 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:29:07 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:29:07.060296 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: 
failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:29:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:09.439986 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:14.439396 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:19.440126 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:24.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:29.440233 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:33 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:29:33.004491 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:29:33 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:29:33.004775 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:29:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:34.440136 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:39.439631 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:44.439953 2014 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 03:29:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:49.439886 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:54.440171 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:29:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:29:59.440085 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:04.440295 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:30:06.400483 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:30:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:30:06.400796 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:30:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:09.439440 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:14.440075 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:19.439569 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:24.439880 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 
03:30:26 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:30:26.214874 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:30:26 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:30:26.214899 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:30:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:29.439739 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:34.440216 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:39.439796 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:44.439973 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:30:45.407691 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:30:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:30:45.407712 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:30:49 localhost.localdomain 
microshift[2014]: sysconfwatch-controller I0213 03:30:49.440235 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:54.439668 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:30:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:30:59.440331 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:04.440404 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:09.439550 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:14.439624 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:19.439825 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:24.439378 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:25 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:31:25.338998 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:31:25 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:31:25.339165 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:31:25 localhost.localdomain microshift[2014]: kube-apiserver 
W0213 03:31:25.917684 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:31:25 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:31:25.917722 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:31:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:29.439640 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:34.440249 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:39.439877 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:44.439894 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:49.440032 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:54.440274 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:31:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:31:59.440303 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:04.439631 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:09.439617 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:14 
localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:14.439646 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:17 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:32:17.103488 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:32:17 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:32:17.103765 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:32:18 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:32:18.700213 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:32:18 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:32:18.700528 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:32:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:19.439535 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:24.439525 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:29.440135 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:34 localhost.localdomain 
microshift[2014]: sysconfwatch-controller I0213 03:32:34.439466 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:39.439772 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:44.440294 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:49.440258 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:54.439634 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:32:59.439711 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:32:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:32:59.653594 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:32:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:32:59.653617 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:33:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:04.439540 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:09.440177 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:11 localhost.localdomain microshift[2014]: kube-apiserver 
W0213 03:33:11.997900 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:33:11 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:33:11.998286 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:33:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:14.440324 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:19.440496 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:24.440110 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:29.439920 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:34.440294 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:39.440462 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:44.439566 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:49.440020 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:54.439604 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:33:55 
localhost.localdomain microshift[2014]: kube-apiserver W0213 03:33:55.307964 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:33:55 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:33:55.308105 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:33:58 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:33:58.116819 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:33:58 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:33:58.117147 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:33:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:33:59.439859 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:04.439536 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:09.440149 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:14.440350 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:19 localhost.localdomain 
microshift[2014]: sysconfwatch-controller I0213 03:34:19.439637 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:24.439679 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:29.440134 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:29 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:34:29.481910 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:34:29 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:34:29.482045 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:34:30 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:34:30.670001 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:34:30 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:34:30.670283 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:34:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:34.439965 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:39 localhost.localdomain microshift[2014]: 
sysconfwatch-controller I0213 03:34:39.439539 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:44.439518 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:49.440385 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:54.439377 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:34:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:34:59.439728 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:04.439996 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:35:06.460945 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:35:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:35:06.460971 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:35:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:09.439941 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:14.439778 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:17 localhost.localdomain microshift[2014]: kube-apiserver W0213 
03:35:17.268349 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:35:17 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:35:17.268864 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:35:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:19.440455 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:24.439831 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:29.440035 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:34.439392 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:39.439633 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:44.439478 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:49.439566 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:35:50 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:35:50.776491 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:35:50 localhost.localdomain 
microshift[2014]: kube-apiserver E0213 03:35:50.776782 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:35:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:35:54.440076 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:35:57 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:35:57.616149 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:35:57 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:35:57.616305 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
[The same three messages repeat unchanged from 03:35:59 through 03:46:38: the sysconfwatch-controller info line "ovn gateway IP address: 192.168.122.17" every 5 seconds, and the kube-apiserver reflector warning/error pairs for *v1.Group (get groups.user.openshift.io) and for *v1.ClusterResourceQuota (get clusterresourcequotas.quota.openshift.io) roughly every 30-45 seconds each.]
failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:46:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:46:39.440213 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:46:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:46:44.439784 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:46:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:46:49.439917 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:46:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:46:54.440479 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:46:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:46:59.440103 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:04.439631 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:09.439321 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:11 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:47:11.771418 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:47:11 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:47:11.772025 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:47:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 
03:47:14.440300 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:19.440328 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:24.440426 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:28 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:47:28.961467 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:47:28 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:47:28.961829 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:47:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:29.439771 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:34.440040 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:39.439634 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:44.439511 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:49.439591 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:54.440122 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:47:56 localhost.localdomain 
microshift[2014]: kube-apiserver W0213 03:47:56.604182 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:47:56 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:47:56.604678 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:47:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:47:59.439681 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:04.439425 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:04 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:48:04.904554 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:48:04 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:48:04.904583 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:48:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:09.439627 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:14.439724 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:19 localhost.localdomain microshift[2014]: 
sysconfwatch-controller I0213 03:48:19.439874 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:24.440173 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:29.440021 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:34.439866 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:39.440276 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:40 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:48:40.246378 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:48:40 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:48:40.246404 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:48:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:44.440303 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:45 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:48:45.151982 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:48:45 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:48:45.152125 2014 
reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:48:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:49.439785 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:54.439686 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:48:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:48:59.440314 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:04.440409 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:09.439973 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:14.439994 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:19.440278 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:20 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:49:20.837893 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:49:20 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:49:20.838385 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get 
clusterresourcequotas.quota.openshift.io) Feb 13 03:49:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:24.439598 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:25 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:49:25.642921 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:49:25 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:49:25.642941 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:49:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:29.440040 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:34.439703 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:39.439884 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:44.440013 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:49.439885 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:54.440823 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:49:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:49:59.439431 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:04 localhost.localdomain microshift[2014]: 
sysconfwatch-controller I0213 03:50:04.440254 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:09.439526 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:14.440303 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:19.439524 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:20 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:50:20.782366 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:50:20 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:50:20.782637 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:50:22 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:50:22.993117 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:50:22 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:50:22.993426 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:50:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 
03:50:24.439755 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:29.439552 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:34.439859 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:39.439541 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:44.439751 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:49.439897 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:54.439549 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:50:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:50:59.439704 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:01 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:51:01.621061 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:51:01 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:51:01.621092 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:51:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:04.439639 2014 net.go:46] ovn 
gateway IP address: 192.168.122.17 Feb 13 03:51:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:09.439804 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:14.439627 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:19.439769 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:19 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:51:19.579359 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:51:19 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:51:19.579384 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:51:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:24.439898 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:29.440558 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:34.440072 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:39.440366 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:44.440430 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:46 localhost.localdomain microshift[2014]: kube-apiserver W0213 
03:51:46.579057 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:51:46 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:51:46.579081 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:51:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:49.439532 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:54.439807 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:51:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:51:59.439981 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:04.440169 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:09.439942 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:14.439866 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:14 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:52:14.971326 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:52:14 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:52:14.971365 2014 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:52:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:19.439470 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:24.439716 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:29.439882 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:34.440275 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:37 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:52:37.881623 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:52:37 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:52:37.881647 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:52:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:39.439313 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:44.440266 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:49.439734 2014 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 03:52:52 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:52:52.338338 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:52:52 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:52:52.338364 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:52:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:54.439898 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:52:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:52:59.440160 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:04.439874 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:09.440177 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:14.439995 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:19.439982 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:20 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:53:20.433655 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:53:20 localhost.localdomain microshift[2014]: kube-apiserver 
E0213 03:53:20.433827 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:53:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:24.439946 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:29.439967 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:33 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:53:33.797862 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:53:33 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:53:33.798294 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:53:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:34.439566 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:39.440299 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:44.439771 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:49.439787 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:51 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:53:51.509631 2014 reflector.go:424] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:53:51 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:53:51.509657 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 03:53:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:54.440171 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:53:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:53:59.439560 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:54:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:04.440183 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:54:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:09.440054 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:54:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:14.439721 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:54:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:19.440269 2014 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 03:54:23 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:54:23.047380 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 03:54:23 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:54:23.047405 2014 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:54:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:24.439605 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:29.440222 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:34.440130 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:37 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:54:37.859904 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:54:37 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:54:37.859932 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:54:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:39.439839 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:44.439432 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:49.439957 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:54.439777 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:54:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:54:59.439713 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:04.439867 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:09.439881 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:14.439631 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:18 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:55:18.193306 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:55:18 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:55:18.193756 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:55:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:19.439705 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:19 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:55:19.764612 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:55:19 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:55:19.764642 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:55:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:24.440477 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:29.439766 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:34.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:39.439797 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:44.440160 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:49.439363 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:54.439966 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:55:59.439845 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:55:59 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:55:59.761765 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:55:59 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:55:59.761801 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:56:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:04.441266 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:09.439940 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:14.440051 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:18 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:56:18.687145 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:56:18 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:56:18.687337 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:56:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:19.439695 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:24.440410 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:29.440139 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:34.439597 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:39.439625 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:41 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:56:41.467137 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:56:41 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:56:41.467160 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:56:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:44.440161 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:49.440166 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:54.439941 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:56:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:56:59.439913 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:01 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:57:01.359259 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:57:01 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:57:01.359283 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:57:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:04.440211 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:09.440296 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:13 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:57:13.226254 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:57:13 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:57:13.226293 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:57:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:14.440427 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:19.440094 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:24.439886 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:29.440118 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:34.439681 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:39.439882 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:40 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:57:40.394513 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:57:40 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:57:40.394652 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:57:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:44.439596 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:49.439966 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:54.440208 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:57:58 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:57:58.956617 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:57:58 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:57:58.957010 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:57:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:57:59.439828 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:04.439752 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:09.440168 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:14.440097 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:19.440422 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:21 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:58:21.446278 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:58:21 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:58:21.446314 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:58:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:24.440501 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:29.439830 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:34.439770 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:39.440383 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:44.440153 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:49.439825 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:54.440113 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:58:57 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:58:57.807028 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:58:57 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:58:57.807053 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:58:58 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:58:58.059916 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:58:58 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:58:58.059960 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:58:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:58:59.440390 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:04.439719 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:09.440325 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:14.439722 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:19.440156 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:24.439950 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:29.440067 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:34.439401 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:38 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:59:38.368152 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:59:38 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:59:38.368631 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 03:59:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:39.439919 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:44.439560 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:44 localhost.localdomain microshift[2014]: kube-apiserver W0213 03:59:44.595317 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:59:44 localhost.localdomain microshift[2014]: kube-apiserver E0213 03:59:44.595346 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 03:59:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:49.440259 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:54.439709 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 03:59:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 03:59:59.439939 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:04.439650 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:09.440200 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:14.440416 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:19.440008 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:19 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:00:19.714144 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:00:19 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:00:19.714187 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:00:21 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:00:21.737081 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:00:21 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:00:21.737418 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:00:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:24.440463 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:29.439902 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:34.440282 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:39.439768 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:44.439793 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:49.439901 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:54.439992 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:00:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:00:59.439736 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:04.440179 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:08 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:01:08.981154 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:01:08 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:01:08.981499 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:01:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:09.439527 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:14.439903 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:15 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:01:15.395171 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:01:15 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:01:15.395194 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:01:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:19.439801 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:24.440560 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:29.439787 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:34.440335 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:39.439741 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:44.440289 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:46 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:01:46.621635 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:01:46 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:01:46.621666 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:01:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:49.440359 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:54.440064 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:01:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:01:59.440388 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:04.440135 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:06 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:02:06.471139 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:02:06 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:02:06.471725 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:02:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:09.439831 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:14.439895 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:19.440080 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:24.440712 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:29.439636 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:34.440146 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:35 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:02:35.505147 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:02:35 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:02:35.505629 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:02:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:39.440023 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:44.439815 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:49.440239 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:53 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:02:53.195368 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:02:53 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:02:53.195640 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:02:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:54.439804 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:02:59 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:02:59.439500 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:04 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:04.439700 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:09 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:09.439569 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:12 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:03:12.659705 2014 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:03:12 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:03:12.659727 2014 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:03:14 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:14.439861 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:19 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:19.440137 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:24 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:24.440625 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:29 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:29.440350 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:34 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:34.439731 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:35 localhost.localdomain microshift[2014]: kube-apiserver W0213 04:03:35.850224 2014 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:03:35 localhost.localdomain microshift[2014]: kube-apiserver E0213 04:03:35.850265 2014 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:03:39 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:39.439515 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:44 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:44.440330 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:49 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:49.439586 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:54.440353 2014 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:03:54 localhost.localdomain systemd[1]: Stopping MicroShift...
Feb 13 04:03:54 localhost.localdomain microshift[2014]: ??? I0213 04:03:54.481978 2014 run.go:153] Interrupt received. Stopping services
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-apiserver I0213 04:03:54.482434 2014 genericapiserver.go:506] "[graceful-termination] shutdown event" name="ShutdownInitiated"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-apiserver I0213 04:03:54.482463 2014 genericapiserver.go:978] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"openshift-kube-apiserver", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ShutdownInitiated' Received signal to terminate, becoming unready, but keeping serving
Feb 13 04:03:54 localhost.localdomain microshift[2014]: sysconfwatch-controller I0213 04:03:54.482711 2014 manager.go:119] sysconfwatch-controller completed
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.483593 2014 secure_serving.go:255] Stopped listening on 127.0.0.1:10257
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.483679 2014 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler I0213 04:03:54.483923 2014 secure_serving.go:255] Stopped listening on [::]:10259
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler I0213 04:03:54.483978 2014 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: route-controller-manager I0213 04:03:54.484204 2014 ingress.go:325] Shutting down controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-apiserver I0213 04:03:54.484512 2014 controller.go:211] Shutting down kubernetes service endpoint reconciler
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.484938 2014 ttlafterfinished_controller.go:116] Shutting down TTL after finished controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.484955 2014 pv_controller_base.go:334] Shutting down persistent volume controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.484973 2014 pvc_protection_controller.go:111] "Shutting down PVC protection controller"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.484987 2014 disruption.go:447] Shutting down disruption controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485006 2014 replica_set.go:213] Shutting down replicationcontroller controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485025 2014 gc_controller.go:113] Shutting down GC controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485037 2014 endpoints_controller.go:195] Shutting down endpoint controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485055 2014 endpointslice_controller.go:269] Shutting down endpoint slice controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485073 2014 expand_controller.go:352] Shutting down expand controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485165 2014 range_allocator.go:183] Shutting down range CIDR allocator
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485182 2014 node_ipam_controller.go:169] Shutting down ipam controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485196 2014 namespace_controller.go:207] Shutting down namespace controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485211 2014 attach_detach_controller.go:367] Shutting down attach detach controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485225 2014 pv_protection_controller.go:87] Shutting down PV protection controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485237 2014 garbagecollector.go:172] Shutting down garbage collector controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485251 2014 endpointslicemirroring_controller.go:224] Shutting down EndpointSliceMirroring controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485267 2014 controller.go:181] Shutting down ephemeral volume controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485280 2014 serviceaccounts_controller.go:123] Shutting down service account controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485292 2014 publisher.go:92] Shutting down service CA certificate configmap publisher
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485352 2014 node_lifecycle_controller.go:581] Shutting down node controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485373 2014 replica_set.go:213] Shutting down replicaset controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485391 2014 resource_quota_controller.go:300] Shutting down resource quota controller
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485536 2014 pv_controller_base.go:602] claim worker queue shutting down
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.485945 2014 publisher.go:113] Shutting down root CA certificate configmap publisher
Feb 13 04:03:54 localhost.localdomain microshift[2014]:
kube-controller-manager I0213 04:03:54.486079 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486092 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486099 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486105 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486112 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486121 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486127 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486133 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486140 2014 resource_quota_controller.go:264] resource quota controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486148 2014 pv_controller_base.go:545] volume worker queue shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486175 2014 graph_builder.go:319] stopped 50 of 50 monitors Feb 13 04:03:54 localhost.localdomain 
microshift[2014]: kube-controller-manager I0213 04:03:54.486180 2014 graph_builder.go:320] GraphBuilder stopping Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486373 2014 certificate_controller.go:124] Shutting down certificate controller "csrapproving" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486392 2014 cronjob_controllerv2.go:149] "Shutting down cronjob controller v2" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486410 2014 certificate_controller.go:124] Shutting down certificate controller "csrsigning-kubelet-serving" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486427 2014 certificate_controller.go:124] Shutting down certificate controller "csrsigning-kube-apiserver-client" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486438 2014 certificate_controller.go:124] Shutting down certificate controller "csrsigning-legacy-unknown" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486449 2014 certificate_controller.go:124] Shutting down certificate controller "csrsigning-kubelet-client" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486463 2014 stateful_set.go:164] Shutting down statefulset controller Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486495 2014 horizontal.go:193] Shutting down HPA controller Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486775 2014 job_controller.go:205] Shutting down job controller Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486797 2014 clusterroleaggregation_controller.go:200] Shutting down ClusterRoleAggregator Feb 13 04:03:54 localhost.localdomain 
microshift[2014]: kube-controller-manager I0213 04:03:54.486813 2014 deployment_controller.go:166] "Shutting down controller" controller="deployment" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486830 2014 daemon_controller.go:290] Shutting down daemon sets controller Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486869 2014 tokens_controller.go:189] Shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486883 2014 cleaner.go:90] Shutting down CSR cleaner controller Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486945 2014 dynamic_serving_content.go:146] "Shutting down controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.486992 2014 dynamic_serving_content.go:146] "Shutting down controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487038 2014 dynamic_serving_content.go:146] "Shutting down controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487208 2014 dynamic_serving_content.go:146] "Shutting down controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key" Feb 13 04:03:54 localhost.localdomain 
microshift[2014]: kube-controller-manager I0213 04:03:54.487367 2014 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487385 2014 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487396 2014 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController Feb 13 04:03:54 localhost.localdomain microshift[2014]: kubelet I0213 04:03:54.487809 2014 manager.go:119] kubelet completed Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487915 2014 horizontal.go:242] horizontal pod autoscaler controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487919 2014 horizontal.go:242] horizontal pod autoscaler controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487923 2014 horizontal.go:242] horizontal pod autoscaler controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487926 2014 horizontal.go:242] horizontal pod autoscaler controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.487928 2014 horizontal.go:242] horizontal pod autoscaler controller worker shutting down Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.488015 2014 resource_quota_monitor.go:329] QuotaMonitor stopped 28 of 28 monitors Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.490770 2014 
resource_quota_monitor.go:330] QuotaMonitor stopping Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.488026 2014 manager.go:119] kube-controller-manager completed Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.488058 2014 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.490817 2014 base_controller.go:114] Shutting down worker of namespace-security-allocation-controller controller ... Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.490828 2014 base_controller.go:114] Shutting down worker of pod-security-admission-label-synchronization-controller controller ... Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.490836 2014 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler E0213 04:03:54.488141 2014 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler E0213 04:03:54.488156 2014 manager.go:116] service kube-scheduler exited with error: finished without leader elect, stopping MicroShift
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler I0213 04:03:54.488250 2014 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler I0213 04:03:54.488255 2014 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-scheduler I0213 04:03:54.488259 2014 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kubelet I0213 04:03:54.488720 2014 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/kubelet-ca.crt"
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.488735 2014 base_controller.go:167] Shutting down namespace-security-allocation-controller ...
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.491758 2014 base_controller.go:104] All namespace-security-allocation-controller workers have been terminated
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.488747 2014 base_controller.go:167] Shutting down pod-security-admission-label-synchronization-controller ...
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.491854 2014 base_controller.go:104] All pod-security-admission-label-synchronization-controller workers have been terminated
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.488757 2014 base_controller.go:167] Shutting down WebhookAuthenticatorCertApprover_csr-approver-controller ...
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.491864 2014 base_controller.go:104] All WebhookAuthenticatorCertApprover_csr-approver-controller workers have been terminated
Feb 13 04:03:54 localhost.localdomain microshift[2014]: microshift-mdns-controller I0213 04:03:54.488770 2014 manager.go:119] microshift-mdns-controller completed
Feb 13 04:03:54 localhost.localdomain microshift[2014]: kube-controller-manager I0213 04:03:54.488831 2014 resource_quota_controller.go:264] resource quota controller worker shutting down
Feb 13 04:03:54 localhost.localdomain microshift[2014]: cluster-policy-controller I0213 04:03:54.488979 2014 manager.go:119] cluster-policy-controller completed
Feb 13 04:03:54 localhost.localdomain microshift[2014]: ??? I0213 04:03:54.495212 2014 run.go:159] Another interrupt received. Force terminating services
Feb 13 04:03:54 localhost.localdomain microshift[2014]: ??? I0213 04:03:54.495231 2014 run.go:163] MicroShift stopped
Feb 13 04:03:54 localhost.localdomain systemd[1]: microshift.service: Succeeded.
Feb 13 04:03:54 localhost.localdomain systemd[1]: Stopped MicroShift.
Feb 13 04:03:54 localhost.localdomain systemd[1]: microshift.service: Consumed 3min 1.900s CPU time
Feb 13 04:05:12 localhost.localdomain systemd[1]: Starting MicroShift...
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? I0213 04:05:13.284384 132400 run.go:115] Starting MicroShift
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.284998 132400 certchains.go:122] [admin-kubeconfig-signer] rotate at: 2032-02-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285034 132400 certchains.go:122] [admin-kubeconfig-signer admin-kubeconfig-client] rotate at: 2032-02-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285058 132400 certchains.go:122] [aggregator-signer] rotate at: 2023-10-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285081 132400 certchains.go:122] [aggregator-signer aggregator-client] rotate at: 2023-10-16 09:05:13 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285101 132400 certchains.go:122] [etcd-signer] rotate at: 2032-02-16 06:24:33 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285121 132400 certchains.go:122] [etcd-signer apiserver-etcd-client] rotate at: 2032-02-16 06:24:33 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285147 132400 certchains.go:122] [ingress-ca] rotate at: 2032-02-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285166 132400 certchains.go:122] [ingress-ca router-default-serving] rotate at: 2023-10-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285186 132400 certchains.go:122] [kube-apiserver-external-signer] rotate at: 2032-02-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285207 132400 certchains.go:122] [kube-apiserver-external-signer kube-external-serving] rotate at: 2023-10-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285226 132400 certchains.go:122] [kube-apiserver-localhost-signer] rotate at: 2032-02-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285246 132400 certchains.go:122] [kube-apiserver-localhost-signer kube-apiserver-localhost-serving] rotate at: 2023-10-16 06:24:32 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285265 132400 certchains.go:122] [kube-apiserver-service-network-signer] rotate at: 2032-02-16 06:24:33 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285285 132400 certchains.go:122] [kube-apiserver-service-network-signer kube-apiserver-service-network-serving] rotate at: 2023-10-16 06:24:33 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285305 132400 certchains.go:122] [kube-apiserver-to-kubelet-signer] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285326 132400 certchains.go:122] [kube-apiserver-to-kubelet-signer kube-apiserver-to-kubelet-client] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285345 132400 certchains.go:122] [kube-control-plane-signer] rotate at: 2023-10-16 06:24:29 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285366 132400 certchains.go:122] [kube-control-plane-signer cluster-policy-controller] rotate at: 2023-10-16 09:05:13 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285385 132400 certchains.go:122] [kube-control-plane-signer kube-controller-manager] rotate at: 2023-10-16 09:05:12 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285406 132400 certchains.go:122] [kube-control-plane-signer kube-scheduler] rotate at: 2023-10-16 09:05:12 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285425 132400 certchains.go:122] [kube-control-plane-signer route-controller-manager] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285445 132400 certchains.go:122] [kubelet-signer] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285464 132400 certchains.go:122] [kubelet-signer kube-csr-signer] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285486 132400 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-client] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285504 132400 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-server] rotate at: 2023-10-16 06:24:30 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285524 132400 certchains.go:122] [service-ca] rotate at: 2032-02-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? E0213 04:05:13.285545 132400 certchains.go:122] [service-ca route-controller-manager-serving] rotate at: 2023-10-16 06:24:31 +0000 UTC
Feb 13 04:05:13 localhost.localdomain microshift[132400]: ??? I0213 04:05:13.285753 132400 run.go:126] Started service-manager
Feb 13 04:05:13 localhost.localdomain microshift[132400]: etcd I0213 04:05:13.285783 132400 manager.go:114] Starting etcd
Feb 13 04:05:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:13.286006 132400 manager.go:114] Starting sysconfwatch-controller
Feb 13 04:05:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:13.286016 132400 sysconfwatch_linux.go:89] starting sysconfwatch-controller with IP address "192.168.122.17"
Feb 13 04:05:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:13.286021 132400 sysconfwatch_linux.go:95] sysconfwatch-controller is ready
Feb 13 04:05:13 localhost.localdomain microshift[132400]: etcd W0213 04:05:13.286801 132400 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "Addr": "127.0.0.1:2379",
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "ServerName": "127.0.0.1",
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "Attributes": null,
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "BalancerAttributes": null,
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "Type": 0,
Feb 13 04:05:13 localhost.localdomain microshift[132400]: "Metadata": null
Feb 13 04:05:13 localhost.localdomain microshift[132400]: }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: etcd I0213 04:05:16.289582 132400 etcd.go:111] etcd is ready!
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.289694 132400 manager.go:114] Starting kube-apiserver
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.290643 132400 kube-apiserver.go:325] "kube-apiserver" not yet ready: Get "https://127.0.0.1:6443/readyz": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 13 04:05:16 localhost.localdomain microshift[132400]: Flag --openshift-config has been deprecated, to be removed
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291524 132400 flags.go:64] FLAG: --admission-control="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291532 132400 flags.go:64] FLAG: --admission-control-config-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291541 132400 flags.go:64] FLAG: --advertise-address=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291546 132400 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291551 132400 flags.go:64] FLAG: --allow-metric-labels="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291573 132400 flags.go:64] FLAG: --allow-privileged="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291577 132400 flags.go:64] FLAG: --anonymous-auth="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291580 132400 flags.go:64] FLAG: --api-audiences="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291584 132400 flags.go:64] FLAG: --apiserver-count="1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291590 132400 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291594 132400 flags.go:64] FLAG: --audit-log-batch-max-size="1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291598 132400 flags.go:64] FLAG: --audit-log-batch-max-wait="0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291609 132400 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291612 132400 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291616 132400 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291624 132400 flags.go:64] FLAG: --audit-log-compress="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291627 132400 flags.go:64] FLAG: --audit-log-format="json"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291630 132400 flags.go:64] FLAG: --audit-log-maxage="0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291633 132400 flags.go:64] FLAG: --audit-log-maxbackup="0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291636 132400 flags.go:64] FLAG: --audit-log-maxsize="0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291639 132400 flags.go:64] FLAG: --audit-log-mode="blocking"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291643 132400 flags.go:64] FLAG: --audit-log-path=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291646 132400 flags.go:64] FLAG: --audit-log-truncate-enabled="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291651 132400 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291656 132400 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291669 132400 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291673 132400 flags.go:64] FLAG: --audit-policy-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291676 132400 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291679 132400 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291683 132400 flags.go:64] FLAG: --audit-webhook-batch-max-size="400"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291689 132400 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291693 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291696 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291699 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291702 132400 flags.go:64] FLAG: --audit-webhook-config-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291706 132400 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291711 132400 flags.go:64] FLAG: --audit-webhook-mode="batch"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291714 132400 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291717 132400 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291721 132400 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291724 132400 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291727 132400 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291731 132400 flags.go:64] FLAG: --authentication-token-webhook-config-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291734 132400 flags.go:64] FLAG: --authentication-token-webhook-version="v1beta1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291739 132400 flags.go:64] FLAG: --authorization-mode="[AlwaysAllow]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291744 132400 flags.go:64] FLAG: --authorization-policy-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291748 132400 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291752 132400 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291755 132400 flags.go:64] FLAG: --authorization-webhook-config-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291758 132400 flags.go:64] FLAG: --authorization-webhook-version="v1beta1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291761 132400 flags.go:64] FLAG: --bind-address="0.0.0.0"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291767 132400 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291770 132400 flags.go:64] FLAG: --client-ca-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291773 132400 flags.go:64] FLAG: --cloud-config=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291776 132400 flags.go:64] FLAG: --cloud-provider=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291779 132400 flags.go:64] FLAG: --contention-profiling="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291783 132400 flags.go:64] FLAG: --cors-allowed-origins="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291786 132400 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291793 132400 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291797 132400 flags.go:64] FLAG: --default-watch-cache-size="100"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291800 132400 flags.go:64] FLAG: --delete-collection-workers="1"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291803 132400 flags.go:64] FLAG: --disable-admission-plugins="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291808 132400 flags.go:64] FLAG: --disabled-metrics="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291811 132400 flags.go:64] FLAG: --egress-selector-config-file=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291814 132400 flags.go:64] FLAG: --enable-admission-plugins="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291820 132400 flags.go:64] FLAG: --enable-aggregator-routing="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291824 132400 flags.go:64] FLAG: --enable-bootstrap-token-auth="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291827 132400 flags.go:64] FLAG: --enable-garbage-collector="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291830 132400 flags.go:64] FLAG: --enable-logs-handler="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291833 132400 flags.go:64] FLAG: --enable-priority-and-fairness="true"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291836 132400 flags.go:64] FLAG: --encryption-provider-config=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291839 132400 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291845 132400 flags.go:64] FLAG: --endpoint-reconciler-type="lease"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291848 132400 flags.go:64] FLAG: --etcd-cafile=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291851 132400 flags.go:64] FLAG: --etcd-certfile=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291854 132400 flags.go:64] FLAG: --etcd-compaction-interval="5m0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291858 132400 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291861 132400 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291864 132400 flags.go:64] FLAG: --etcd-healthcheck-timeout="2s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291870 132400 flags.go:64] FLAG: --etcd-keyfile=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291873 132400 flags.go:64] FLAG: --etcd-prefix="/registry"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291876 132400 flags.go:64] FLAG: --etcd-readycheck-timeout="2s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291879 132400 flags.go:64] FLAG: --etcd-servers="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291883 132400 flags.go:64] FLAG: --etcd-servers-overrides="[]"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291887 132400 flags.go:64] FLAG: --event-ttl="1h0m0s"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291890 132400 flags.go:64] FLAG: --external-hostname=""
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291895 
132400 flags.go:64] FLAG: --feature-gates="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291900 132400 flags.go:64] FLAG: --goaway-chance="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291905 132400 flags.go:64] FLAG: --help="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291909 132400 flags.go:64] FLAG: --http2-max-streams-per-connection="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291912 132400 flags.go:64] FLAG: --kubelet-certificate-authority="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291915 132400 flags.go:64] FLAG: --kubelet-client-certificate="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291918 132400 flags.go:64] FLAG: --kubelet-client-key="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291923 132400 flags.go:64] FLAG: --kubelet-port="10250" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291928 132400 flags.go:64] FLAG: --kubelet-preferred-address-types="[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291933 132400 flags.go:64] FLAG: --kubelet-read-only-port="10255" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291936 132400 flags.go:64] FLAG: --kubelet-timeout="5s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291940 132400 flags.go:64] FLAG: --kubernetes-service-node-port="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291943 132400 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291948 
132400 flags.go:64] FLAG: --livez-grace-period="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291951 132400 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291954 132400 flags.go:64] FLAG: --logging-format="text" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291957 132400 flags.go:64] FLAG: --master-service-namespace="default" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291961 132400 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291964 132400 flags.go:64] FLAG: --max-mutating-requests-inflight="200" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291967 132400 flags.go:64] FLAG: --max-requests-inflight="400" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291970 132400 flags.go:64] FLAG: --min-request-timeout="1800" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291976 132400 flags.go:64] FLAG: --oidc-ca-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291980 132400 flags.go:64] FLAG: --oidc-client-id="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291983 132400 flags.go:64] FLAG: --oidc-groups-claim="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291987 132400 flags.go:64] FLAG: --oidc-groups-prefix="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291990 132400 flags.go:64] FLAG: --oidc-issuer-url="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291993 132400 flags.go:64] FLAG: --oidc-required-claim="" Feb 13 04:05:16 
localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.291999 132400 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292004 132400 flags.go:64] FLAG: --oidc-username-claim="sub" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292007 132400 flags.go:64] FLAG: --oidc-username-prefix="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292009 132400 flags.go:64] FLAG: --openshift-config="/tmp/kube-apiserver-config-1559160234.yaml" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292013 132400 flags.go:64] FLAG: --permit-address-sharing="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292016 132400 flags.go:64] FLAG: --permit-port-sharing="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292019 132400 flags.go:64] FLAG: --profiling="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292022 132400 flags.go:64] FLAG: --proxy-client-cert-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292026 132400 flags.go:64] FLAG: --proxy-client-key-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292029 132400 flags.go:64] FLAG: --request-timeout="1m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292033 132400 flags.go:64] FLAG: --requestheader-allowed-names="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292036 132400 flags.go:64] FLAG: --requestheader-client-ca-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292039 132400 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[]" Feb 13 04:05:16 
localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292044 132400 flags.go:64] FLAG: --requestheader-group-headers="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292084 132400 flags.go:64] FLAG: --requestheader-username-headers="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292090 132400 flags.go:64] FLAG: --runtime-config="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292094 132400 flags.go:64] FLAG: --secure-port="6443" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292097 132400 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292100 132400 flags.go:64] FLAG: --service-account-extend-token-expiration="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292103 132400 flags.go:64] FLAG: --service-account-issuer="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292118 132400 flags.go:64] FLAG: --service-account-jwks-uri="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292127 132400 flags.go:64] FLAG: --service-account-key-file="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292179 132400 flags.go:64] FLAG: --service-account-lookup="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292182 132400 flags.go:64] FLAG: --service-account-max-token-expiration="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292185 132400 flags.go:64] FLAG: --service-account-signing-key-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292189 132400 flags.go:64] FLAG: 
--service-cluster-ip-range="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292192 132400 flags.go:64] FLAG: --service-node-port-range="30000-32767" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292198 132400 flags.go:64] FLAG: --show-hidden-metrics-for-version="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292203 132400 flags.go:64] FLAG: --shutdown-delay-duration="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292207 132400 flags.go:64] FLAG: --shutdown-send-retry-after="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292210 132400 flags.go:64] FLAG: --storage-backend="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292213 132400 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292216 132400 flags.go:64] FLAG: --strict-transport-security-directives="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292220 132400 flags.go:64] FLAG: --tls-cert-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292223 132400 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292229 132400 flags.go:64] FLAG: --tls-min-version="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292232 132400 flags.go:64] FLAG: --tls-private-key-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292235 132400 flags.go:64] FLAG: --tls-sni-cert-key="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292239 132400 flags.go:64] FLAG: 
--token-auth-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292242 132400 flags.go:64] FLAG: --tracing-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292245 132400 flags.go:64] FLAG: --v="2" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292252 132400 flags.go:64] FLAG: --version="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292258 132400 flags.go:64] FLAG: --vmodule="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292262 132400 flags.go:64] FLAG: --watch-cache="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292265 132400 flags.go:64] FLAG: --watch-cache-sizes="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292300 132400 plugins.go:84] "Registered admission plugin" plugin="authorization.openshift.io/RestrictSubjectBindings" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292309 132400 plugins.go:84] "Registered admission plugin" plugin="route.openshift.io/RouteHostAssignment" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292315 132400 plugins.go:84] "Registered admission plugin" plugin="image.openshift.io/ImagePolicy" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292320 132400 plugins.go:84] "Registered admission plugin" plugin="route.openshift.io/IngressAdmission" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292328 132400 plugins.go:84] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagementCPUsOverride" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292334 132400 plugins.go:84] "Registered admission plugin" 
plugin="scheduling.openshift.io/OriginPodNodeEnvironment" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292340 132400 plugins.go:84] "Registered admission plugin" plugin="autoscaling.openshift.io/ClusterResourceOverride" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292345 132400 plugins.go:84] "Registered admission plugin" plugin="quota.openshift.io/ClusterResourceQuota" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292352 132400 plugins.go:84] "Registered admission plugin" plugin="autoscaling.openshift.io/RunOnceDuration" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292362 132400 plugins.go:84] "Registered admission plugin" plugin="scheduling.openshift.io/PodNodeConstraints" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292368 132400 plugins.go:84] "Registered admission plugin" plugin="security.openshift.io/SecurityContextConstraint" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292376 132400 plugins.go:84] "Registered admission plugin" plugin="security.openshift.io/SCCExecRestrictions" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292381 132400 plugins.go:84] "Registered admission plugin" plugin="network.openshift.io/ExternalIPRanger" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292387 132400 plugins.go:84] "Registered admission plugin" plugin="network.openshift.io/RestrictedEndpointsAdmission" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292392 132400 plugins.go:84] "Registered admission plugin" plugin="storage.openshift.io/CSIInlineVolumeSecurity" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292398 132400 plugins.go:84] "Registered admission 
plugin" plugin="config.openshift.io/ValidateAPIServer" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292403 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateAuthentication" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292408 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateFeatureGate" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292418 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateConsole" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292429 132400 plugins.go:84] "Registered admission plugin" plugin="operator.openshift.io/ValidateDNS" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292435 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateImage" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292442 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateOAuth" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292449 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateProject" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292454 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/DenyDeleteClusterConfiguration" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292459 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateScheduler" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292465 132400 plugins.go:84] "Registered admission plugin" 
plugin="operator.openshift.io/ValidateKubeControllerManager" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292470 132400 plugins.go:84] "Registered admission plugin" plugin="quota.openshift.io/ValidateClusterResourceQuota" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292476 132400 plugins.go:84] "Registered admission plugin" plugin="security.openshift.io/ValidateSecurityContextConstraints" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292482 132400 plugins.go:84] "Registered admission plugin" plugin="authorization.openshift.io/ValidateRoleBindingRestriction" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292489 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateNetwork" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292494 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIRequestCount" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292500 132400 plugins.go:84] "Registered admission plugin" plugin="config.openshift.io/RestrictExtremeWorkerLatencyProfile" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292505 132400 plugins.go:84] "Registered admission plugin" plugin="security.openshift.io/DefaultSecurityContextConstraints" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292510 132400 plugins.go:84] "Registered admission plugin" plugin="route.openshift.io/ValidateRoute" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.292515 132400 plugins.go:84] "Registered admission plugin" plugin="route.openshift.io/DefaultRoute" Feb 13 04:05:16 localhost.localdomain microshift[132400]: Flag --openshift-config has been deprecated, to be 
removed Feb 13 04:05:16 localhost.localdomain microshift[132400]: Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19 Feb 13 04:05:16 localhost.localdomain microshift[132400]: Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297761 132400 flags.go:64] FLAG: --admission-control="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297768 132400 flags.go:64] FLAG: --admission-control-config-file="/tmp/kubeapiserver-admission-config.yaml4185655674" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297772 132400 flags.go:64] FLAG: --advertise-address="192.168.122.17" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297776 132400 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297779 132400 flags.go:64] FLAG: --allow-metric-labels="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297783 132400 flags.go:64] FLAG: --allow-privileged="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297786 132400 flags.go:64] FLAG: --anonymous-auth="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297788 132400 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297793 132400 flags.go:64] FLAG: --apiserver-count="1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297796 132400 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 
04:05:16.297799 132400 flags.go:64] FLAG: --audit-log-batch-max-size="1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297801 132400 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297804 132400 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297807 132400 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297810 132400 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297813 132400 flags.go:64] FLAG: --audit-log-compress="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297816 132400 flags.go:64] FLAG: --audit-log-format="json" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297818 132400 flags.go:64] FLAG: --audit-log-maxage="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297821 132400 flags.go:64] FLAG: --audit-log-maxbackup="10" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297824 132400 flags.go:64] FLAG: --audit-log-maxsize="200" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297826 132400 flags.go:64] FLAG: --audit-log-mode="blocking" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297830 132400 flags.go:64] FLAG: --audit-log-path="/var/log/kube-apiserver/audit.log" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297833 132400 flags.go:64] FLAG: --audit-log-truncate-enabled="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 
04:05:16.297836 132400 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297840 132400 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297843 132400 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297846 132400 flags.go:64] FLAG: --audit-policy-file="/var/lib/microshift/resources/kube-apiserver-audit-policies/default.yaml" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297850 132400 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297853 132400 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297856 132400 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297859 132400 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297862 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297865 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297868 132400 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297872 132400 flags.go:64] FLAG: --audit-webhook-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 
04:05:16.297875 132400 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297879 132400 flags.go:64] FLAG: --audit-webhook-mode="batch" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297882 132400 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297885 132400 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297888 132400 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297891 132400 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297893 132400 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297897 132400 flags.go:64] FLAG: --authentication-token-webhook-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297900 132400 flags.go:64] FLAG: --authentication-token-webhook-version="v1beta1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297903 132400 flags.go:64] FLAG: --authorization-mode="[Scope,SystemMasters,RBAC,Node]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297908 132400 flags.go:64] FLAG: --authorization-policy-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297911 132400 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297914 
132400 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297917 132400 flags.go:64] FLAG: --authorization-webhook-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297920 132400 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297923 132400 flags.go:64] FLAG: --bind-address="0.0.0.0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297927 132400 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297931 132400 flags.go:64] FLAG: --client-ca-file="/var/lib/microshift/certs/ca-bundle/client-ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297934 132400 flags.go:64] FLAG: --cloud-config="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297937 132400 flags.go:64] FLAG: --cloud-provider="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297940 132400 flags.go:64] FLAG: --contention-profiling="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297943 132400 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297948 132400 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297952 132400 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297955 132400 flags.go:64] FLAG: --default-watch-cache-size="100" 
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297957 132400 flags.go:64] FLAG: --delete-collection-workers="1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297960 132400 flags.go:64] FLAG: --disable-admission-plugins="[authorization.openshift.io/RestrictSubjectBindings,authorization.openshift.io/ValidateRoleBindingRestriction,autoscaling.openshift.io/ManagementCPUsOverride,config.openshift.io/DenyDeleteClusterConfiguration,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateConsole,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/ValidateScheduler,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,quota.openshift.io/ValidateClusterResourceQuota]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297972 132400 flags.go:64] FLAG: --disabled-metrics="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297976 132400 flags.go:64] FLAG: --egress-selector-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297978 132400 flags.go:64] FLAG: 
--enable-admission-plugins="[CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,PersistentVolumeLabel,PodNodeSelector,PodTolerationRestriction,Priority,ResourceQuota,RuntimeClass,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,route.openshift.io/IngressAdmission,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/DefaultSecurityContextConstraints,security.openshift.io/SCCExecRestrictions,security.openshift.io/SecurityContextConstraint,security.openshift.io/ValidateSecurityContextConstraints,storage.openshift.io/CSIInlineVolumeSecurity]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297995 132400 flags.go:64] FLAG: --enable-aggregator-routing="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.297998 132400 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298001 132400 flags.go:64] FLAG: --enable-garbage-collector="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298006 132400 flags.go:64] FLAG: --enable-logs-handler="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298009 132400 flags.go:64] FLAG: --enable-priority-and-fairness="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298012 132400 flags.go:64] FLAG: --encryption-provider-config="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298015 132400 flags.go:64] FLAG: 
--encryption-provider-config-automatic-reload="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298018 132400 flags.go:64] FLAG: --endpoint-reconciler-type="lease" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298020 132400 flags.go:64] FLAG: --etcd-cafile="/var/lib/microshift/certs/etcd-signer/ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298023 132400 flags.go:64] FLAG: --etcd-certfile="/var/lib/microshift/certs/etcd-signer/apiserver-etcd-client/client.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298027 132400 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298031 132400 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298035 132400 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298038 132400 flags.go:64] FLAG: --etcd-healthcheck-timeout="2s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298041 132400 flags.go:64] FLAG: --etcd-keyfile="/var/lib/microshift/certs/etcd-signer/apiserver-etcd-client/client.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298044 132400 flags.go:64] FLAG: --etcd-prefix="kubernetes.io" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298047 132400 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298050 132400 flags.go:64] FLAG: --etcd-servers="[https://127.0.0.1:2379]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298054 
132400 flags.go:64] FLAG: --etcd-servers-overrides="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298057 132400 flags.go:64] FLAG: --event-ttl="3h0m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298061 132400 flags.go:64] FLAG: --external-hostname="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298064 132400 flags.go:64] FLAG: --feature-gates="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298068 132400 flags.go:64] FLAG: --goaway-chance="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298073 132400 flags.go:64] FLAG: --help="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298077 132400 flags.go:64] FLAG: --http2-max-streams-per-connection="2000" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298080 132400 flags.go:64] FLAG: --kubelet-certificate-authority="/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca-bundle.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298083 132400 flags.go:64] FLAG: --kubelet-client-certificate="/var/lib/microshift/certs/kube-apiserver-to-kubelet-client-signer/kube-apiserver-to-kubelet-client/client.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298087 132400 flags.go:64] FLAG: --kubelet-client-key="/var/lib/microshift/certs/kube-apiserver-to-kubelet-client-signer/kube-apiserver-to-kubelet-client/client.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298091 132400 flags.go:64] FLAG: --kubelet-port="10250" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298094 132400 flags.go:64] FLAG: --kubelet-preferred-address-types="[InternalIP]" Feb 13 04:05:16 
localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298098 132400 flags.go:64] FLAG: --kubelet-read-only-port="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298100 132400 flags.go:64] FLAG: --kubelet-timeout="5s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298103 132400 flags.go:64] FLAG: --kubernetes-service-node-port="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298106 132400 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298109 132400 flags.go:64] FLAG: --livez-grace-period="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298112 132400 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298116 132400 flags.go:64] FLAG: --logging-format="text" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298119 132400 flags.go:64] FLAG: --master-service-namespace="default" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298122 132400 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298125 132400 flags.go:64] FLAG: --max-mutating-requests-inflight="1000" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298128 132400 flags.go:64] FLAG: --max-requests-inflight="3000" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298131 132400 flags.go:64] FLAG: --min-request-timeout="3600" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298134 132400 flags.go:64] FLAG: --oidc-ca-file="" Feb 13 04:05:16 localhost.localdomain 
microshift[132400]: kube-apiserver I0213 04:05:16.298137 132400 flags.go:64] FLAG: --oidc-client-id="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298140 132400 flags.go:64] FLAG: --oidc-groups-claim="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298143 132400 flags.go:64] FLAG: --oidc-groups-prefix="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298145 132400 flags.go:64] FLAG: --oidc-issuer-url="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298148 132400 flags.go:64] FLAG: --oidc-required-claim="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298152 132400 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298156 132400 flags.go:64] FLAG: --oidc-username-claim="sub" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298159 132400 flags.go:64] FLAG: --oidc-username-prefix="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298162 132400 flags.go:64] FLAG: --openshift-config="/tmp/kube-apiserver-config-1559160234.yaml" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298165 132400 flags.go:64] FLAG: --permit-address-sharing="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298169 132400 flags.go:64] FLAG: --permit-port-sharing="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298172 132400 flags.go:64] FLAG: --profiling="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298175 132400 flags.go:64] FLAG: --proxy-client-cert-file="/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.crt" Feb 13 04:05:16 
localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298179 132400 flags.go:64] FLAG: --proxy-client-key-file="/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298183 132400 flags.go:64] FLAG: --request-timeout="1m0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298186 132400 flags.go:64] FLAG: --requestheader-allowed-names="[kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298192 132400 flags.go:64] FLAG: --requestheader-client-ca-file="/var/lib/microshift/certs/aggregator-signer/ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298201 132400 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298204 132400 flags.go:64] FLAG: --requestheader-group-headers="[X-Remote-Group]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298209 132400 flags.go:64] FLAG: --requestheader-username-headers="[X-Remote-User]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298213 132400 flags.go:64] FLAG: --runtime-config="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298218 132400 flags.go:64] FLAG: --secure-port="6443" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298221 132400 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298224 132400 flags.go:64] FLAG: --service-account-extend-token-expiration="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: 
kube-apiserver I0213 04:05:16.298227 132400 flags.go:64] FLAG: --service-account-issuer="[https://kubernetes.default.svc]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298232 132400 flags.go:64] FLAG: --service-account-jwks-uri="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298235 132400 flags.go:64] FLAG: --service-account-key-file="[/var/lib/microshift/resources/kube-apiserver/secrets/service-account-key/service-account.pub]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298240 132400 flags.go:64] FLAG: --service-account-lookup="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298243 132400 flags.go:64] FLAG: --service-account-max-token-expiration="0s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298246 132400 flags.go:64] FLAG: --service-account-signing-key-file="/var/lib/microshift/resources/kube-apiserver/secrets/service-account-key/service-account.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298251 132400 flags.go:64] FLAG: --service-cluster-ip-range="10.43.0.0/16" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298254 132400 flags.go:64] FLAG: --service-node-port-range="30000-32767" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298259 132400 flags.go:64] FLAG: --show-hidden-metrics-for-version="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298266 132400 flags.go:64] FLAG: --shutdown-delay-duration="1m10s" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298269 132400 flags.go:64] FLAG: --shutdown-send-retry-after="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298272 132400 flags.go:64] 
FLAG: --storage-backend="etcd3" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298275 132400 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298278 132400 flags.go:64] FLAG: --strict-transport-security-directives="[max-age=31536000,includeSubDomains,preload]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298283 132400 flags.go:64] FLAG: --tls-cert-file="/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298288 132400 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298296 132400 flags.go:64] FLAG: --tls-min-version="VersionTLS12" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298299 132400 flags.go:64] FLAG: --tls-private-key-file="/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298304 132400 flags.go:64] FLAG: 
--tls-sni-cert-key="[/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298317 132400 flags.go:64] FLAG: --token-auth-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298320 132400 flags.go:64] FLAG: --tracing-config-file="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298323 132400 flags.go:64] FLAG: --v="2" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298327 132400 flags.go:64] FLAG: --version="false" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298331 132400 flags.go:64] FLAG: --vmodule="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298335 132400 flags.go:64] FLAG: --watch-cache="true" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298342 132400 flags.go:64] FLAG: --watch-cache-sizes="[]" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298368 132400 server.go:612] external host was not specified, using 192.168.122.17 Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.298788 132400 server.go:203] Version: v1.26.0 Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 
04:05:16.298799 132400 server.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.299088 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.299213 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.299388 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.299574 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.497021 132400 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: 
kube-apiserver I0213 04:05:16.497219 132400 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.497553 132400 shared_informer.go:273] Waiting for caches to sync for node_authorizer Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.498071 132400 audit.go:350] Using audit backend: ignoreErrors Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.499049 132400 admission.go:77] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.499180 132400 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.499211 132400 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.499235 132400 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.499380 132400 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled. 
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.499981 132400 plugins.go:158] Loaded 20 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,route.openshift.io/DefaultRoute,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.500039 132400 plugins.go:161] Loaded 28 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,storage.openshift.io/CSIInlineVolumeSecurity,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota. 
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.500090 132400 apf_controller.go:279] NewTestableController "Controller" with serverConcurrencyLimit=4000, requestWaitLimit=15s, name=Controller, asFieldManager="api-priority-and-fairness-config-consumer-v1" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.500148 132400 apf_controller.go:846] Introducing queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=4000, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false (shares=5, shareSum=5) Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.508079 132400 store.go:1482] "Monitoring resource count at path" resource="events" path="//events" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.508207 132400 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.508266 132400 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.517236 132400 store.go:1482] "Monitoring resource count at path" resource="customresourcedefinitions.apiextensions.k8s.io" path="//apiextensions.k8s.io/customresourcedefinitions" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.521537 132400 cacher.go:435] cacher (customresourcedefinitions.apiextensions.k8s.io): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.530718 132400 genericapiserver.go:694] Skipping API apiextensions.k8s.io/v1beta1 
because it has no resources. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.531273 132400 instance.go:277] Using reconciler: lease Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.541567 132400 store.go:1482] "Monitoring resource count at path" resource="podtemplates" path="//podtemplates" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.542151 132400 cacher.go:435] cacher (podtemplates): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.545190 132400 store.go:1482] "Monitoring resource count at path" resource="events" path="//events" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.548769 132400 store.go:1482] "Monitoring resource count at path" resource="limitranges" path="//limitranges" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.549099 132400 cacher.go:435] cacher (limitranges): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.552650 132400 store.go:1482] "Monitoring resource count at path" resource="resourcequotas" path="//resourcequotas" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.552963 132400 cacher.go:435] cacher (resourcequotas): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.556298 132400 store.go:1482] "Monitoring resource count at path" resource="secrets" path="//secrets" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.556905 132400 cacher.go:435] cacher (secrets): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.560318 132400 store.go:1482] "Monitoring resource count at path" resource="persistentvolumes" path="//persistentvolumes" Feb 13 04:05:16 localhost.localdomain 
microshift[132400]: kube-apiserver I0213 04:05:16.560700 132400 cacher.go:435] cacher (persistentvolumes): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.564620 132400 store.go:1482] "Monitoring resource count at path" resource="persistentvolumeclaims" path="//persistentvolumeclaims" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.565472 132400 cacher.go:435] cacher (persistentvolumeclaims): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.568580 132400 store.go:1482] "Monitoring resource count at path" resource="configmaps" path="//configmaps" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.572185 132400 cacher.go:435] cacher (configmaps): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.572700 132400 store.go:1482] "Monitoring resource count at path" resource="namespaces" path="//namespaces" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.573261 132400 cacher.go:435] cacher (namespaces): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.577179 132400 store.go:1482] "Monitoring resource count at path" resource="endpoints" path="//services/endpoints" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.577647 132400 cacher.go:435] cacher (endpoints): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.581482 132400 store.go:1482] "Monitoring resource count at path" resource="nodes" path="//minions" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.582191 132400 cacher.go:435] cacher (nodes): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.585324 132400 store.go:1482] 
"Monitoring resource count at path" resource="pods" path="//pods" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.588688 132400 store.go:1482] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.591931 132400 cacher.go:435] cacher (pods): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.592258 132400 cacher.go:435] cacher (serviceaccounts): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.599612 132400 store.go:1482] "Monitoring resource count at path" resource="replicationcontrollers" path="//controllers" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.600400 132400 cacher.go:435] cacher (replicationcontrollers): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.603354 132400 store.go:1482] "Monitoring resource count at path" resource="services" path="//services/specs" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.606326 132400 cacher.go:435] cacher (services): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.644182 132400 instance.go:622] API group "internal.apiserver.k8s.io" is not enabled, skipping. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.644338 132400 instance.go:635] Enabling API group "authentication.k8s.io". Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.644381 132400 instance.go:635] Enabling API group "authorization.k8s.io". 
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.648359 132400 store.go:1482] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.649004 132400 cacher.go:435] cacher (horizontalpodautoscalers.autoscaling): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.652200 132400 store.go:1482] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.652550 132400 cacher.go:435] cacher (horizontalpodautoscalers.autoscaling): initialized Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656074 132400 store.go:1482] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656096 132400 deleted_kinds.go:173] Removing resource horizontalpodautoscalers.v2beta2.autoscaling because it is time to stop serving it per APILifecycle. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656101 132400 deleted_kinds.go:173] Removing resource horizontalpodautoscalers/status.v2beta2.autoscaling because it is time to stop serving it per APILifecycle. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656110 132400 deleted_kinds.go:184] Removing version v2beta2.autoscaling because it is time to stop serving it because it has no resources per APILifecycle. Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656114 132400 instance.go:635] Enabling API group "autoscaling". 
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.656420 132400 cacher.go:435] cacher (horizontalpodautoscalers.autoscaling): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.659878 132400 store.go:1482] "Monitoring resource count at path" resource="jobs.batch" path="//jobs"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.660179 132400 cacher.go:435] cacher (jobs.batch): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.663488 132400 store.go:1482] "Monitoring resource count at path" resource="cronjobs.batch" path="//cronjobs"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.663559 132400 instance.go:635] Enabling API group "batch".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.664269 132400 cacher.go:435] cacher (cronjobs.batch): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.667412 132400 store.go:1482] "Monitoring resource count at path" resource="certificatesigningrequests.certificates.k8s.io" path="//certificatesigningrequests"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.667651 132400 instance.go:635] Enabling API group "certificates.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.667895 132400 cacher.go:435] cacher (certificatesigningrequests.certificates.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.671710 132400 store.go:1482] "Monitoring resource count at path" resource="leases.coordination.k8s.io" path="//leases"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.672223 132400 instance.go:635] Enabling API group "coordination.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.672157 132400 cacher.go:435] cacher (leases.coordination.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.676296 132400 store.go:1482] "Monitoring resource count at path" resource="endpointslices.discovery.k8s.io" path="//endpointslices"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.676800 132400 instance.go:635] Enabling API group "discovery.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.676731 132400 cacher.go:435] cacher (endpointslices.discovery.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.680870 132400 store.go:1482] "Monitoring resource count at path" resource="networkpolicies.networking.k8s.io" path="//networkpolicies"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.681209 132400 cacher.go:435] cacher (networkpolicies.networking.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.685128 132400 store.go:1482] "Monitoring resource count at path" resource="ingresses.networking.k8s.io" path="//ingress"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.685515 132400 cacher.go:435] cacher (ingresses.networking.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.689312 132400 store.go:1482] "Monitoring resource count at path" resource="ingressclasses.networking.k8s.io" path="//ingressclasses"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.689613 132400 cacher.go:435] cacher (ingressclasses.networking.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.689921 132400 instance.go:635] Enabling API group "networking.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.693334 132400 store.go:1482] "Monitoring resource count at path" resource="runtimeclasses.node.k8s.io" path="//runtimeclasses"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.693406 132400 instance.go:635] Enabling API group "node.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.693688 132400 cacher.go:435] cacher (runtimeclasses.node.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.697213 132400 store.go:1482] "Monitoring resource count at path" resource="poddisruptionbudgets.policy" path="//poddisruptionbudgets"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.697233 132400 instance.go:635] Enabling API group "policy".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.699754 132400 cacher.go:435] cacher (poddisruptionbudgets.policy): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.700533 132400 store.go:1482] "Monitoring resource count at path" resource="roles.rbac.authorization.k8s.io" path="//roles"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.701281 132400 cacher.go:435] cacher (roles.rbac.authorization.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.704750 132400 store.go:1482] "Monitoring resource count at path" resource="rolebindings.rbac.authorization.k8s.io" path="//rolebindings"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.707856 132400 cacher.go:435] cacher (rolebindings.rbac.authorization.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.708728 132400 store.go:1482] "Monitoring resource count at path" resource="clusterroles.rbac.authorization.k8s.io" path="//clusterroles"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.710920 132400 cacher.go:435] cacher (clusterroles.rbac.authorization.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.712720 132400 store.go:1482] "Monitoring resource count at path" resource="clusterrolebindings.rbac.authorization.k8s.io" path="//clusterrolebindings"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.712807 132400 instance.go:635] Enabling API group "rbac.authorization.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.713539 132400 cacher.go:435] cacher (clusterrolebindings.rbac.authorization.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.717990 132400 store.go:1482] "Monitoring resource count at path" resource="priorityclasses.scheduling.k8s.io" path="//priorityclasses"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.718054 132400 instance.go:635] Enabling API group "scheduling.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.718517 132400 cacher.go:435] cacher (priorityclasses.scheduling.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.722030 132400 store.go:1482] "Monitoring resource count at path" resource="csistoragecapacities.storage.k8s.io" path="//csistoragecapacities"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.722564 132400 cacher.go:435] cacher (csistoragecapacities.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.726009 132400 store.go:1482] "Monitoring resource count at path" resource="storageclasses.storage.k8s.io" path="//storageclasses"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.727719 132400 cacher.go:435] cacher (storageclasses.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.729728 132400 store.go:1482] "Monitoring resource count at path" resource="volumeattachments.storage.k8s.io" path="//volumeattachments"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.734159 132400 store.go:1482] "Monitoring resource count at path" resource="csinodes.storage.k8s.io" path="//csinodes"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.735239 132400 cacher.go:435] cacher (volumeattachments.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.735988 132400 cacher.go:435] cacher (csinodes.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.745226 132400 store.go:1482] "Monitoring resource count at path" resource="csidrivers.storage.k8s.io" path="//csidrivers"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.747568 132400 cacher.go:435] cacher (csidrivers.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.750585 132400 store.go:1482] "Monitoring resource count at path" resource="csistoragecapacities.storage.k8s.io" path="//csistoragecapacities"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.750942 132400 cacher.go:435] cacher (csistoragecapacities.storage.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.751009 132400 instance.go:635] Enabling API group "storage.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.755231 132400 store.go:1482] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.755852 132400 cacher.go:435] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.759570 132400 store.go:1482] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.760092 132400 cacher.go:435] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.767047 132400 store.go:1482] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.767630 132400 cacher.go:435] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.770940 132400 store.go:1482] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.772853 132400 cacher.go:435] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.775000 132400 store.go:1482] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.777874 132400 cacher.go:435] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778477 132400 store.go:1482] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778522 132400 deleted_kinds.go:173] Removing resource prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io because it is time to stop serving it per APILifecycle.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778528 132400 deleted_kinds.go:173] Removing resource prioritylevelconfigurations/status.v1beta1.flowcontrol.apiserver.k8s.io because it is time to stop serving it per APILifecycle.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778532 132400 deleted_kinds.go:173] Removing resource flowschemas.v1beta1.flowcontrol.apiserver.k8s.io because it is time to stop serving it per APILifecycle.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778535 132400 deleted_kinds.go:173] Removing resource flowschemas/status.v1beta1.flowcontrol.apiserver.k8s.io because it is time to stop serving it per APILifecycle.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778539 132400 deleted_kinds.go:184] Removing version v1beta1.flowcontrol.apiserver.k8s.io because it is time to stop serving it because it has no resources per APILifecycle.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.778542 132400 instance.go:635] Enabling API group "flowcontrol.apiserver.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.780140 132400 cacher.go:435] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.782431 132400 store.go:1482] "Monitoring resource count at path" resource="deployments.apps" path="//deployments"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.783112 132400 cacher.go:435] cacher (deployments.apps): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.786796 132400 store.go:1482] "Monitoring resource count at path" resource="statefulsets.apps" path="//statefulsets"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.791344 132400 cacher.go:435] cacher (statefulsets.apps): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.793290 132400 store.go:1482] "Monitoring resource count at path" resource="daemonsets.apps" path="//daemonsets"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.805211 132400 cacher.go:435] cacher (daemonsets.apps): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.805538 132400 store.go:1482] "Monitoring resource count at path" resource="replicasets.apps" path="//replicasets"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.806315 132400 cacher.go:435] cacher (replicasets.apps): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.810174 132400 store.go:1482] "Monitoring resource count at path" resource="controllerrevisions.apps" path="//controllerrevisions"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.810707 132400 instance.go:635] Enabling API group "apps".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.811093 132400 cacher.go:435] cacher (controllerrevisions.apps): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.814830 132400 store.go:1482] "Monitoring resource count at path" resource="validatingwebhookconfigurations.admissionregistration.k8s.io" path="//validatingwebhookconfigurations"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.815139 132400 cacher.go:435] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.818616 132400 store.go:1482] "Monitoring resource count at path" resource="mutatingwebhookconfigurations.admissionregistration.k8s.io" path="//mutatingwebhookconfigurations"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.818651 132400 instance.go:635] Enabling API group "admissionregistration.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.819441 132400 cacher.go:435] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): initialized
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.822211 132400 store.go:1482] "Monitoring resource count at path" resource="events" path="//events"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.822236 132400 instance.go:635] Enabling API group "events.k8s.io".
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.822243 132400 instance.go:622] API group "resource.k8s.io" is not enabled, skipping.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.872237 132400 genericapiserver.go:694] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.872365 132400 genericapiserver.go:694] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.873193 132400 genericapiserver.go:694] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.875006 132400 genericapiserver.go:694] Skipping API autoscaling/v2beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.875077 132400 genericapiserver.go:694] Skipping API autoscaling/v2beta2 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.876365 132400 genericapiserver.go:694] Skipping API batch/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.877258 132400 genericapiserver.go:694] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.878087 132400 genericapiserver.go:694] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.878151 132400 genericapiserver.go:694] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.880216 132400 genericapiserver.go:694] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.880267 132400 genericapiserver.go:694] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.881162 132400 genericapiserver.go:694] Skipping API node.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.881213 132400 genericapiserver.go:694] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.881252 132400 genericapiserver.go:694] Skipping API policy/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.883441 132400 genericapiserver.go:694] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.883486 132400 genericapiserver.go:694] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.884235 132400 genericapiserver.go:694] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.884283 132400 genericapiserver.go:694] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.886525 132400 genericapiserver.go:694] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.888735 132400 genericapiserver.go:694] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.888785 132400 genericapiserver.go:694] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.890965 132400 genericapiserver.go:694] Skipping API apps/v1beta2 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.891018 132400 genericapiserver.go:694] Skipping API apps/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.892051 132400 genericapiserver.go:694] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.892095 132400 genericapiserver.go:694] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.892890 132400 genericapiserver.go:694] Skipping API events.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.898326 132400 store.go:1482] "Monitoring resource count at path" resource="apiservices.apiregistration.k8s.io" path="//apiregistration.k8s.io/apiservices"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:16.916583 132400 genericapiserver.go:694] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.916888 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="aggregator-proxy-cert::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.crt::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.key"
Feb 13 04:05:16 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:16.919415 132400 cacher.go:435] cacher (apiservices.apiregistration.k8s.io): initialized
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.160930 132400 aggregator.go:115] Building initial OpenAPI spec
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.169954 132400 aggregator.go:118] Finished initial OpenAPI spec generation after 8.859736ms
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.304712 132400 genericapiserver.go:539] "[graceful-termination] using HTTP Server shutdown timeout" ShutdownTimeout="2s"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305099 132400 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305150 132400 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305197 132400 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305301 132400 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305388 132400 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305428 132400 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"kube-control-plane-signer\" [] issuer=\"\" (2023-02-13 06:24:28 +0000 UTC to 2024-02-13 06:24:29 +0000 UTC (now=2023-02-13 09:05:17.305383339 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305465 132400 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"kube-apiserver-to-kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:17.305457327 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305493 132400 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2033-02-10 06:24:30 +0000 UTC (now=2023-02-13 09:05:17.305486612 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305523 132400 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:17.305516725 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305564 132400 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"kube-csr-signer\" [] issuer=\"kubelet-signer\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:17.305548254 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305594 132400 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt,request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" certDetail="\"aggregator-signer\" [] issuer=\"\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:17.305587795 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305526 132400 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305732 132400 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" certDetail="\"10.43.0.1\" [serving] validServingFor=[10.43.0.1,api-int.example.com,api.example.com,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.43.0.1] issuer=\"kube-apiserver-service-network-signer\" (2023-02-13 06:24:32 +0000 UTC to 2024-02-13 06:24:33 +0000 UTC (now=2023-02-13 09:05:17.305722083 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305817 132400 named_certificates.go:53] "Loaded SNI cert" index=3 certName="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" certDetail="\"10.43.0.1\" [serving] validServingFor=[10.43.0.1,api-int.example.com,api.example.com,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.43.0.1] issuer=\"kube-apiserver-service-network-signer\" (2023-02-13 06:24:32 +0000 UTC to 2024-02-13 06:24:33 +0000 UTC (now=2023-02-13 09:05:17.305806968 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305908 132400 named_certificates.go:53] "Loaded SNI cert" index=2 certName="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key" certDetail="\"127.0.0.1\" [serving] validServingFor=[127.0.0.1,localhost,127.0.0.1] issuer=\"kube-apiserver-localhost-signer\" (2023-02-13 06:24:31 +0000 UTC to 2024-02-13 06:24:32 +0000 UTC (now=2023-02-13 09:05:17.305897262 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.305986 132400 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key" certDetail="\"api.example.com\" [serving] validServingFor=[api.example.com,localhost.localdomain] issuer=\"kube-apiserver-external-signer\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:17.305978149 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306067 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279116\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279116\" (2023-02-13 08:05:16 +0000 UTC to 2024-02-13 08:05:16 +0000 UTC (now=2023-02-13 09:05:17.306060499 +0000 UTC))"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306114 132400 secure_serving.go:210] Serving securely on [::]:6443
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306135 132400 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306169 132400 genericapiserver.go:620] [graceful-termination] waiting for shutdown to be initiated
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306557 132400 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306610 132400 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306724 132400 controller.go:121] Starting legacy_token_tracking_controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306746 132400 shared_informer.go:273] Waiting for caches to sync for configmaps
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306774 132400 available_controller.go:516] Starting AvailableConditionController
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306794 132400 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306816 132400 controller.go:80] Starting OpenAPI V3 AggregationController
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.306842 132400 controller.go:83] Starting OpenAPI AggregationController
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307015 132400 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.crt::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.key"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307103 132400 apiservice_controller.go:97] Starting APIServiceRegistrationController
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307130 132400 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307165 132400 autoregister_controller.go:141] Starting autoregister controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307184 132400 cache.go:32] Waiting for caches to sync for autoregister controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.307350 132400 customresource_discovery_controller.go:288] Starting DiscoveryController
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.308433 132400 apf_controller.go:361] Starting API Priority and Fairness config controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.308535 132400 gc_controller.go:78] Starting apiserver lease garbage collector
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.308574 132400 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.308625 132400 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.308711 132400 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt"
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309086 132400 crdregistration_controller.go:112] Starting
crd-autoregister controller Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309129 132400 shared_informer.go:273] Waiting for caches to sync for crd-autoregister Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309170 132400 controller.go:85] Starting OpenAPI controller Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309205 132400 controller.go:85] Starting OpenAPI V3 controller Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309237 132400 naming_controller.go:291] Starting NamingConditionController Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309264 132400 establishing_controller.go:76] Starting EstablishingController Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309291 132400 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309314 132400 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.309347 132400 crd_finalizer.go:266] Starting CRDFinalizer Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.311909 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.312351 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/services" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.315797 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/endpoints" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.316107 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/storageclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.316361 132400 patch_genericapiserver.go:126] Loopback request to "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.316591 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/serviceaccounts" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.316875 132400 patch_genericapiserver.go:126] Loopback request to "/apis/quota.openshift.io/v1/clusterresourcequotas" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.317079 132400 patch_genericapiserver.go:126] Loopback request to "/apis/security.openshift.io/v1/securitycontextconstraints" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.317286 132400 patch_genericapiserver.go:126] Loopback request to "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.320055 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiextensions.k8s.io/v1/customresourcedefinitions" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.320488 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/openshift-apiserver/endpoints/api" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.320724 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/pods" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.321092 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/openshift-oauth-apiserver/endpoints/api" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.321272 132400 patch_genericapiserver.go:126] Loopback request to "/apis/user.openshift.io/v1/groups" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.321472 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.321752 132400 patch_genericapiserver.go:126] Loopback request to "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-cunuhu4kzt3e7ixnoxjgpthiy4" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.321920 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-system/configmaps" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.322140 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-system/configmaps" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.322405 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/roles" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.322634 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/limitranges" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.322919 132400 patch_genericapiserver.go:126] Loopback request to "/apis/node.k8s.io/v1/runtimeclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.323137 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/secrets" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.323347 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.323777 132400 patch_genericapiserver.go:126] Loopback request to "/apis/scheduling.k8s.io/v1/priorityclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.324032 132400 patch_genericapiserver.go:126] Loopback request to "/apis/networking.k8s.io/v1/ingressclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.324252 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/csidrivers" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.324526 132400 patch_genericapiserver.go:126] Loopback request to "/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.324781 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/rolebindings" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.325010 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/volumeattachments" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.325234 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/nodes" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.325459 132400 patch_genericapiserver.go:126] Loopback request to "/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.326309 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/resourcequotas" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.326541 132400 patch_genericapiserver.go:126] Loopback request to "/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.326792 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/persistentvolumes" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.327051 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:17.333804 132400 sdn_readyz_wait.go:102] api.openshift-oauth-apiserver.svc endpoints were not found
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:17.333904 132400 sdn_readyz_wait.go:102] api.openshift-apiserver.svc endpoints were not found
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:17.334189 132400 sdn_readyz_wait.go:138] api-openshift-oauth-apiserver-available did not find an openshift-oauth-apiserver endpoint
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:17.334229 132400 sdn_readyz_wait.go:138] api-openshift-apiserver-available did not find an openshift-apiserver endpoint
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.336684 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/serviceaccounts" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.337004 132400 patch_genericapiserver.go:126] Loopback request to "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.337233 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-system/configmaps" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.337478 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/limitranges" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.337698 132400 patch_genericapiserver.go:126] Loopback request to "/apis/node.k8s.io/v1/runtimeclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.337879 132400 patch_genericapiserver.go:126] Loopback request to "/apis/scheduling.k8s.io/v1/priorityclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.338084 132400 patch_genericapiserver.go:126] Loopback request to "/apis/networking.k8s.io/v1/ingressclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.338262 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/csidrivers" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.338434 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/volumeattachments" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.338709 132400 patch_genericapiserver.go:126] Loopback request to "/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.338888 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/resourcequotas" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.339062 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/persistentvolumes" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.339264 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiextensions.k8s.io/v1/customresourcedefinitions" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.339438 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/pods" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.339610 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.339892 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-system/configmaps" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.340070 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/roles" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.340271 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/secrets" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.340447 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.340624 132400 patch_genericapiserver.go:126] Loopback request to "/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.340820 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/rolebindings" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.341036 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/nodes" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.341204 132400 patch_genericapiserver.go:126] Loopback request to "/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.341403 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.344507 132400 patch_genericapiserver.go:126] Loopback request to "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-cunuhu4kzt3e7ixnoxjgpthiy4" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.345401 132400 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.345947 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.346188 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/services" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.346369 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/endpoints" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.346532 132400 patch_genericapiserver.go:126] Loopback request to "/apis/storage.k8s.io/v1/storageclasses" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.346689 132400 patch_genericapiserver.go:126] Loopback request to "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.346773 132400 kube-apiserver.go:325] "kube-apiserver" not yet ready: unknown
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.358334 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/default/endpoints/kubernetes" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:17.362159 132400 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.362375 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-system" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.363024 132400 healthz.go:261] etcd,etcd-readiness,poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/bootstrap-controller,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/priority-and-fairness-config-producer,poststarthook/apiservice-registration-controller check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]etcd failed: etcd client connection not yet established
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]etcd-readiness failed: etcd client connection not yet established
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/start-apiextensions-controllers failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/crd-informer-synced failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/bootstrap-controller failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/priority-and-fairness-config-producer failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/apiservice-registration-controller failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.363288 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/services" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.363427 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/services" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.363487 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-public" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.364631 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/namespaces/kube-node-lease" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.398295 132400 shared_informer.go:280] Caches are synced for node_authorizer
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.407567 132400 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.407691 132400 cache.go:39] Caches are synced for autoregister controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.408477 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices/v1.security.internal.openshift.io" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.408823 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices/v1.topolvm.io" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.408971 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices/v1.route.openshift.io" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.409178 132400 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.409509 132400 shared_informer.go:280] Caches are synced for configmaps
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.409575 132400 cache.go:39] Caches are synced for AvailableConditionController controller
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.410828 132400 apf_controller.go:366] Running API Priority and Fairness config worker
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.410869 132400 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.410996 132400 shared_informer.go:280] Caches are synced for crd-autoregister
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213
04:05:17.415648 132400 genericapiserver.go:490] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.416482 132400 cacher.go:779] cacher (apiservices.apiregistration.k8s.io): 1 objects queued in incoming channel. Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.416495 132400 cacher.go:779] cacher (apiservices.apiregistration.k8s.io): 2 objects queued in incoming channel. Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.422256 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.423407 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.427219 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429606 132400 apf_controller.go:846] Introducing queues for priority level "system": config={"type":"Limited","limited":{"nominalConcurrencyShares":30,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=490, lendableCL=162, borrowingCL=4000, currentCL=409, quiescing=false (shares=30, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429635 132400 apf_controller.go:846] Introducing queues for priority level "workload-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=654, lendableCL=327, borrowingCL=4000, currentCL=491, quiescing=false (shares=40, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429645 132400 apf_controller.go:846] Introducing queues for priority level "workload-low": config={"type":"Limited","limited":{"nominalConcurrencyShares":100,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":90}}, nominalCL=1633, lendableCL=1470, borrowingCL=4000, currentCL=898, quiescing=false (shares=100, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429663 132400 apf_controller.go:854] Retaining queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=82, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false, numPending=0 (shares=5, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429681 132400 apf_controller.go:846] Introducing queues for priority level "global-default": config={"type":"Limited","limited":{"nominalConcurrencyShares":20,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=327, lendableCL=164, borrowingCL=4000, currentCL=245, quiescing=false (shares=20, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429691 132400 apf_controller.go:846] Introducing queues for priority level "leader-election": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":16,"handSize":4,"queueLengthLimit":50}},"lendablePercent":0}}, nominalCL=164, lendableCL=0, borrowingCL=4000, currentCL=164, quiescing=false (shares=10, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429701 132400 apf_controller.go:846] Introducing queues for priority level "node-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":25}}, nominalCL=654, lendableCL=164, borrowingCL=4000, currentCL=572, quiescing=false (shares=40, shareSum=245)
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429734 132400 apf_controller.go:444] "Update CurrentCL" plName="leader-election" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=382 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429758 132400 apf_controller.go:444] "Update CurrentCL" plName="node-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=1143 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429785 132400 apf_controller.go:444] "Update CurrentCL" plName="system" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=765 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429818 132400 apf_controller.go:444] "Update CurrentCL" plName="workload-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=763 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429853 132400 apf_controller.go:444] "Update CurrentCL" plName="workload-low" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=380 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429886 132400 apf_controller.go:444] "Update CurrentCL" plName="catch-all" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0.00714964914313856 fairFrac=2.331974373907979 currentCL=191 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.429906 132400 apf_controller.go:444] "Update CurrentCL" plName="global-default" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.331974373907979 currentCL=380 backstop=false
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.434704 132400 patch_genericapiserver.go:126] Loopback request to "/apis/apiregistration.k8s.io/v1/apiservices" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.463857 132400 healthz.go:261] etcd,etcd-readiness,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]etcd failed: etcd client connection not yet established
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]etcd-readiness failed: etcd client connection not yet established
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.474241 132400 patch_genericapiserver.go:126] Loopback request to "/api/v1/pods" (user agent "oc/4.12.0 (linux/amd64) kubernetes/b05f7d4") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.564741 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.664289 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.764153 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.864177 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:17.964081 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:17 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.064475 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.164963 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.170485 132400 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.181226 132400 aggregator.go:237] Updating OpenAPI spec because k8s_internal_local_delegation_chain_0000000002 is updated
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.191418 132400 aggregator.go:240] Finished OpenAPI spec generation after 10.121774ms
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.264845 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:18.286593 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.291407 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.291654 132400 kube-apiserver.go:325] "kube-apiserver" not yet ready: an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]etcd-readiness ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]shutdown ok\nreadyz check failed") has prevented the request from succeeding
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.307323 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.307866 132400 patch_genericapiserver.go:126] Loopback request to "/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.310080 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.310290 132400 patch_genericapiserver.go:126] Loopback request to "/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.313220 132400 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.314522 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.315864 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.316543 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.317321 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.318019 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.318771 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.319439 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.321000 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:monitoring" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.321776 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:public-info-viewer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.322450 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.323259 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.324646 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.325507 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.326223 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/view" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.328212 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.329041 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.330874 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.335957 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.336868 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.337910 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.338551 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.339203 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.342082 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.343093 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.343865 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.344652 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.345302 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.346116 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.346757 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.347439 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.349449 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.350298 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.350933 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.351484 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.352110 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:service-account-issuer-discovery" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.352680 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.353240 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.354014 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.354567 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-reader" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.355123 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.355785 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.356429 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.357061 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.357624 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.358265 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.358913 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.359463 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.360020 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslicemirroring-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.360625 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.361180 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ephemeral-volume-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.361732 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.362239 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.362885 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.363502 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.364052 132400 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz
Feb 13 04:05:18 localhost.localdomain microshift[132400]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.364110 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.364672 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.365221 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.365773 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.366305 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.366864 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.367376 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.367928 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.368396 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.368915 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.369442 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.369967 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.370504 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.371036 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.371518 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-after-finished-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.372026 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:root-ca-cert-publisher" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.372503 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-ca-cert-publisher" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.373034 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.373595 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:monitoring" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.374133 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.374685 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.375164 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.375655 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:public-info-viewer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.376168 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.376710 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.377208 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.377781 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.378385 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.378908 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.379411 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:service-account-issuer-discovery" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.379917 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-admin" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.380394 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.380914 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.381386 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.381898 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.382351 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.382840 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.383301 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.383803 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.384257 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslicemirroring-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.384790 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.385268 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ephemeral-volume-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.385774 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.386215 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.386741 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.387193 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.387679 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.388297 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.388933 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.389536 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.390065 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.390578 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.391109 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.391595 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.392100 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.392585 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.393102 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.393615 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.394112 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.394694 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.395470 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.396225 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.396814 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-ca-cert-publisher" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.397369 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.398017 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.398645 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.399140 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.399754 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.400315 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.400882 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.401433 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.402018 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.402518 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.403038 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.403521 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.404093 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.404602 132400 patch_genericapiserver.go:126] Loopback request to "/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" (user agent "microshift/v1.26.0 (linux/amd64) kubernetes/9eb81c2") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
Feb 13 04:05:18 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:18.464359 132400 genericapiserver.go:978] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"openshift-kube-apiserver", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'KubeAPIReadyz' readyz=true
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:19.292931 132400 kube-apiserver.go:339] "kube-apiserver" is ready
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.293076 132400 manager.go:114] Starting kube-scheduler
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.293680 132400 manager.go:114] Starting kube-controller-manager
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294620 132400 core.go:170] Applying corev1 api controllers/kube-controller-manager/namespace-openshift-kube-controller-manager.yaml
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294938 132400 flags.go:64] FLAG: --allocate-node-cidrs="true"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294956 132400 flags.go:64] FLAG: --allow-metric-labels="[]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294960 132400 flags.go:64] FLAG: --allow-untagged-cloud="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294962 132400 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294965 132400 flags.go:64] FLAG: --authentication-kubeconfig="/var/lib/microshift/resources/kube-controller-manager/kubeconfig"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294968 132400 flags.go:64] FLAG: --authentication-skip-lookup="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294969 132400 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294972 132400 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294975 132400 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294978 132400 flags.go:64] FLAG: --authorization-kubeconfig="/var/lib/microshift/resources/kube-controller-manager/kubeconfig"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294981 132400 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294983 132400 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294985 132400 flags.go:64] FLAG: --bind-address="127.0.0.1"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294987 132400 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294989 132400 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294992 132400 flags.go:64] FLAG: --client-ca-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294993 132400 flags.go:64] FLAG: --cloud-config=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294995 132400 flags.go:64] FLAG: --cloud-provider=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294997 132400 flags.go:64] FLAG: --cluster-cidr="10.42.0.0/16"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.294999 132400 flags.go:64] FLAG: --cluster-name="kubernetes"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295001 132400 flags.go:64] FLAG: --cluster-signing-cert-file="/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295004 132400 flags.go:64] FLAG: --cluster-signing-duration="720h0m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295006 132400 flags.go:64] FLAG: --cluster-signing-key-file="/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295008 132400 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295010 132400 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295017 132400 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295019 132400 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295021 132400 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295023 132400 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295025 132400 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295027 132400 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295028 132400 flags.go:64] FLAG: --concurrent-deployment-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295032 132400 flags.go:64] FLAG: --concurrent-endpoint-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295034 132400 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295036 132400 flags.go:64] FLAG: --concurrent-gc-syncs="20"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295038 132400 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295040 132400 flags.go:64] FLAG: --concurrent-namespace-syncs="10"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295042 132400 flags.go:64] FLAG: --concurrent-replicaset-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295044 132400 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295046 132400 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295048 132400 flags.go:64] FLAG: --concurrent-service-syncs="1"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295054 132400 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295056 132400 flags.go:64] FLAG: --concurrent-statefulset-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295057 132400 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295059 132400 flags.go:64] FLAG: --concurrent_rc_syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295061 132400 flags.go:64] FLAG: --configure-cloud-routes="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295063 132400 flags.go:64] FLAG: --contention-profiling="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295065 132400 flags.go:64] FLAG: --controller-start-interval="0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295067 132400 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295070 132400 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295072 132400 flags.go:64] FLAG: --disabled-metrics="[]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295075 132400 flags.go:64] FLAG: --enable-dynamic-provisioning="true"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295077 132400 flags.go:64] FLAG: --enable-garbage-collector="true"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295078 132400 flags.go:64] FLAG: --enable-hostpath-provisioner="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295080 132400 flags.go:64] FLAG: --enable-leader-migration="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295082 132400 flags.go:64] FLAG: --enable-taint-manager="true"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295084 132400 flags.go:64] FLAG: --endpoint-updates-batch-period="0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295091 132400 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295093 132400 flags.go:64] FLAG: --external-cloud-volume-plugin=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295096 132400 flags.go:64] FLAG: --feature-gates=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295098 132400 flags.go:64] FLAG: --flex-volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295101 132400 flags.go:64] FLAG: --help="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295103 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295106 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295108 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295109 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295111 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295113 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295116 132400 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295118 132400 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295120 132400 flags.go:64] FLAG: --kube-api-burst="300"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295122 132400 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295124 132400 flags.go:64] FLAG: --kube-api-qps="150"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295130 132400 flags.go:64] FLAG: --kubeconfig="/var/lib/microshift/resources/kube-controller-manager/kubeconfig"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295133 132400 flags.go:64] FLAG: --large-cluster-size-threshold="50"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295135 132400 flags.go:64] FLAG: --leader-elect="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295136 132400 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295138 132400 flags.go:64] FLAG: --leader-elect-renew-deadline="12s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295140 132400 flags.go:64] FLAG: --leader-elect-resource-lock="configmapsleases"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295142 132400 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295144 132400 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295146 132400 flags.go:64] FLAG: --leader-elect-retry-period="3s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295148 132400 flags.go:64] FLAG: --leader-migration-config=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295150 132400 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295152 132400 flags.go:64] FLAG: --logging-format="text"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295154 132400 flags.go:64] FLAG: --master=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295156 132400 flags.go:64] FLAG: --max-endpoints-per-slice="100"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295157 132400 flags.go:64] FLAG: --min-resync-period="12h0m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295160 132400 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295166 132400 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295169 132400 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295170 132400 flags.go:64] FLAG: --namespace-sync-period="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295172 132400 flags.go:64] FLAG: --node-cidr-mask-size="0"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295175 132400 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295176 132400 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295178 132400 flags.go:64] FLAG: --node-eviction-rate="0.1"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295180 132400 flags.go:64] FLAG: --node-monitor-grace-period="40s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295182 132400 flags.go:64] FLAG: --node-monitor-period="5s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295184 132400 flags.go:64] FLAG: --node-startup-grace-period="1m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295186 132400 flags.go:64] FLAG: --node-sync-period="0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295188 132400 flags.go:64] FLAG: --openshift-config=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295190 132400 flags.go:64] FLAG: --permit-address-sharing="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295192 132400 flags.go:64] FLAG: --permit-port-sharing="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295194 132400 flags.go:64] FLAG: --pod-eviction-timeout="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295196 132400 flags.go:64] FLAG: --profiling="true"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295202 132400 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295204 132400 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295205 132400 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295207 132400 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295209 132400 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295211 132400 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295213 132400 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295216 132400 flags.go:64] FLAG: --requestheader-allowed-names="[]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295218 132400 flags.go:64] FLAG: --requestheader-client-ca-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295221 132400 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295223 132400 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295226 132400 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295228 132400 flags.go:64] FLAG: --resource-quota-sync-period="5m0s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295230 132400 flags.go:64] FLAG: --root-ca-file="/var/lib/microshift/certs/ca-bundle/service-account-token-ca.crt"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295233 132400 flags.go:64] FLAG: --route-reconciliation-period="10s"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295235 132400 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295241 132400 flags.go:64] FLAG: --secure-port="10257"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295243 132400 flags.go:64] FLAG: --service-account-private-key-file="/var/lib/microshift/resources/kube-apiserver/secrets/service-account-key/service-account.key"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295246 132400 flags.go:64] FLAG: --service-cluster-ip-range=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295247 132400 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295249 132400 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295252 132400 flags.go:64] FLAG: --tls-cert-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295253 132400 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295256 132400 flags.go:64] FLAG: --tls-min-version=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295258 132400 flags.go:64] FLAG: --tls-private-key-file=""
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295260 132400 flags.go:64] FLAG: --tls-sni-cert-key="[]"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295263 132400 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295265 132400 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295267 132400 flags.go:64] FLAG: --use-service-account-credentials="true"
Feb 13
04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295269 132400 flags.go:64] FLAG: --v="2" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295271 132400 flags.go:64] FLAG: --version="false" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295274 132400 flags.go:64] FLAG: --vmodule="" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295277 132400 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.295279 132400 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.296026 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" Feb 13 04:05:19 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:19.310884 132400 manager.go:114] Starting openshift-crd-manager Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.411959 132400 core.go:170] Applying corev1 api core/namespace-openshift-infra.yaml Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.540602 132400 serving.go:348] Generated self-signed cert in-memory Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.541189 132400 configfile.go:59] "KubeSchedulerConfiguration v1beta3 is deprecated in v1.26, will be removed in v1.29" Feb 13 04:05:19 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:19.558027 132400 crd.go:155] Applying openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml Feb 13 04:05:19 localhost.localdomain 
microshift[132400]: kube-controller-manager I0213 04:05:19.558992 132400 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.559622 132400 controllermanager.go:196] Version: v1.26.0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.559648 132400 controllermanager.go:198] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561364 132400 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" certDetail="\"localhost@1676270910\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1676270910\" (2023-02-13 05:48:30 +0000 UTC to 2024-02-13 05:48:30 +0000 UTC (now=2023-02-13 09:05:19.561351806 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561429 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.561420212 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561439 132400 secure_serving.go:210] Serving securely on 127.0.0.1:10257 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561626 132400 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561634 132400 shared_informer.go:273] 
Waiting for caches to sync for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561665 132400 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561701 132400 tlsconfig.go:240] "Starting DynamicServingCertificateController" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561720 132400 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561725 132400 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561732 132400 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.561735 132400 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.574499 132400 controllermanager.go:643] Starting "podgc" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.574555 132400 shared_informer.go:273] Waiting for caches to sync for tokens Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.576768 132400 controllermanager.go:672] 
Started "podgc" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.576777 132400 controllermanager.go:643] Starting "garbagecollector" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.576844 132400 gc_controller.go:102] Starting GC controller Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.576850 132400 shared_informer.go:273] Waiting for caches to sync for GC Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.578726 132400 controllermanager.go:672] Started "garbagecollector" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.578734 132400 controllermanager.go:643] Starting "daemonset" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.578792 132400 garbagecollector.go:154] Starting garbage collector controller Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.578797 132400 shared_informer.go:273] Waiting for caches to sync for garbage collector Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.578849 132400 graph_builder.go:291] GraphBuilder running Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.579891 132400 controllermanager.go:672] Started "daemonset" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.579900 132400 controllermanager.go:643] Starting "replicaset" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.579941 132400 daemon_controller.go:271] Starting daemon sets controller Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.579944 132400 shared_informer.go:273] Waiting for caches to sync for daemon 
sets Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.581103 132400 controllermanager.go:672] Started "replicaset" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.581113 132400 controllermanager.go:643] Starting "service" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.582496 132400 replica_set.go:201] Starting replicaset controller Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.582504 132400 shared_informer.go:273] Waiting for caches to sync for ReplicaSet Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager E0213 04:05:19.585077 132400 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:19.585087 132400 controllermanager.go:650] Skipping "service" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.585091 132400 controllermanager.go:643] Starting "pvc-protection" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.587240 132400 controllermanager.go:672] Started "pvc-protection" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.587249 132400 controllermanager.go:643] Starting "horizontalpodautoscaling" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.590371 132400 pvc_protection_controller.go:99] "Starting PVC protection controller" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.590381 132400 shared_informer.go:273] Waiting for caches to sync for PVC protection Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 
04:05:19.591086 132400 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta3, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta3, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=securitycontextconstraints 
storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments topolvm.io/v1, Resource=logicalvolumes], removed: [] Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.593340 132400 controllermanager.go:672] Started "horizontalpodautoscaling" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:19.593349 132400 controllermanager.go:637] "tokencleaner" is disabled Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.593353 132400 controllermanager.go:643] Starting "nodeipam" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.593391 132400 horizontal.go:181] Starting HPA controller Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.593394 132400 shared_informer.go:273] Waiting for caches to sync for HPA Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662304 132400 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662343 132400 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662394 132400 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662482 132400 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:19.662467235 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662567 132400 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" certDetail="\"localhost@1676270910\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1676270910\" (2023-02-13 05:48:30 +0000 UTC to 2024-02-13 05:48:30 +0000 UTC (now=2023-02-13 09:05:19.662557426 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662630 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.662621134 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662754 132400 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-control-plane-signer\" [] issuer=\"\" (2023-02-13 06:24:28 +0000 UTC to 2024-02-13 06:24:29 +0000 UTC (now=2023-02-13 09:05:19.662747458 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662767 132400 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-apiserver-to-kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.662761232 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662779 132400 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2033-02-10 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.662771467 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662789 132400 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.662783158 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662798 132400 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer\" [] issuer=\"kubelet-signer\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.662793175 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662808 132400 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:19.662802667 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662857 132400 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key" certDetail="\"localhost@1676270910\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1676270910\" (2023-02-13 05:48:30 +0000 UTC to 2024-02-13 05:48:30 +0000 UTC (now=2023-02-13 09:05:19.662850392 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.662904 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.662898622 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:19.675239 132400 shared_informer.go:280] Caches are synced for tokens Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.708631 132400 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.714281 132400 configfile.go:105] "Using component config" config=< Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain 
microshift[132400]: clientConnection: Feb 13 04:05:19 localhost.localdomain microshift[132400]: acceptContentTypes: "" Feb 13 04:05:19 localhost.localdomain microshift[132400]: burst: 100 Feb 13 04:05:19 localhost.localdomain microshift[132400]: contentType: application/vnd.kubernetes.protobuf Feb 13 04:05:19 localhost.localdomain microshift[132400]: kubeconfig: /var/lib/microshift/resources/kube-scheduler/kubeconfig Feb 13 04:05:19 localhost.localdomain microshift[132400]: qps: 50 Feb 13 04:05:19 localhost.localdomain microshift[132400]: enableContentionProfiling: true Feb 13 04:05:19 localhost.localdomain microshift[132400]: enableProfiling: true Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: KubeSchedulerConfiguration Feb 13 04:05:19 localhost.localdomain microshift[132400]: leaderElection: Feb 13 04:05:19 localhost.localdomain microshift[132400]: leaderElect: false Feb 13 04:05:19 localhost.localdomain microshift[132400]: leaseDuration: 15s Feb 13 04:05:19 localhost.localdomain microshift[132400]: renewDeadline: 10s Feb 13 04:05:19 localhost.localdomain microshift[132400]: resourceLock: leases Feb 13 04:05:19 localhost.localdomain microshift[132400]: resourceName: kube-scheduler Feb 13 04:05:19 localhost.localdomain microshift[132400]: resourceNamespace: kube-system Feb 13 04:05:19 localhost.localdomain microshift[132400]: retryPeriod: 2s Feb 13 04:05:19 localhost.localdomain microshift[132400]: parallelism: 16 Feb 13 04:05:19 localhost.localdomain microshift[132400]: percentageOfNodesToScore: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: podInitialBackoffSeconds: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: podMaxBackoffSeconds: 10 Feb 13 04:05:19 localhost.localdomain microshift[132400]: profiles: Feb 13 04:05:19 localhost.localdomain microshift[132400]: - pluginConfig: Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: 
kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: DefaultPreemptionArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: minCandidateNodesAbsolute: 100 Feb 13 04:05:19 localhost.localdomain microshift[132400]: minCandidateNodesPercentage: 10 Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: DefaultPreemption Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: hardPodAffinityWeight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: InterPodAffinityArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: InterPodAffinity Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: NodeAffinityArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: NodeAffinity Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: NodeResourcesBalancedAllocationArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: resources: Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: cpu Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: memory Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: NodeResourcesBalancedAllocation Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: 
kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: NodeResourcesFitArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: scoringStrategy: Feb 13 04:05:19 localhost.localdomain microshift[132400]: resources: Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: cpu Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: memory Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: type: LeastAllocated Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: NodeResourcesFit Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: defaultingType: System Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: PodTopologySpreadArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: PodTopologySpread Feb 13 04:05:19 localhost.localdomain microshift[132400]: - args: Feb 13 04:05:19 localhost.localdomain microshift[132400]: apiVersion: kubescheduler.config.k8s.io/v1beta3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: bindTimeoutSeconds: 600 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kind: VolumeBindingArgs Feb 13 04:05:19 localhost.localdomain microshift[132400]: name: VolumeBinding Feb 13 04:05:19 localhost.localdomain microshift[132400]: plugins: Feb 13 04:05:19 localhost.localdomain microshift[132400]: bind: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: filter: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: multiPoint: Feb 13 04:05:19 localhost.localdomain microshift[132400]: enabled: Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: PrioritySort Feb 13 04:05:19 localhost.localdomain 
microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeUnschedulable Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeName Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: TaintToleration Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 3 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeAffinity Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 2 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodePorts Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeResourcesFit Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: VolumeRestrictions Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: EBSLimits Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: GCEPDLimits Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeVolumeLimits Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: AzureDiskLimits Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: VolumeBinding Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: VolumeZone Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain 
microshift[132400]: - name: PodTopologySpread Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 2 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: InterPodAffinity Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 2 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: DefaultPreemption Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: NodeResourcesBalancedAllocation Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: ImageLocality Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 1 Feb 13 04:05:19 localhost.localdomain microshift[132400]: - name: DefaultBinder Feb 13 04:05:19 localhost.localdomain microshift[132400]: weight: 0 Feb 13 04:05:19 localhost.localdomain microshift[132400]: permit: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: postBind: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: postFilter: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: preBind: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: preEnqueue: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: preFilter: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: preScore: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: queueSort: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: reserve: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: score: {} Feb 13 04:05:19 localhost.localdomain microshift[132400]: schedulerName: default-scheduler Feb 13 04:05:19 localhost.localdomain microshift[132400]: > Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.715598 132400 server.go:156] "Starting Kubernetes Scheduler" version="v1.26.0" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 
04:05:19.715652 132400 server.go:158] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.717743 132400 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1676279119\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.717729905 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.717953 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.717937657 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.718005 132400 secure_serving.go:210] Serving securely on [::]:10259 Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.717744 132400 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.718071 132400 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.718179 132400 tlsconfig.go:240] "Starting DynamicServingCertificateController" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.717752 132400 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.718541 
132400 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.717756 132400 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.718582 132400 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.723744 132400 node_tree.go:65] "Added node in listed group to NodeTree" node="localhost.localdomain" zone="" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.818880 132400 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.818940 132400 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819017 132400 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819244 132400 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:19.819226548 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain 
microshift[132400]: kube-scheduler I0213 04:05:19.819358 132400 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1676279119\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.819350225 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819435 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.819423497 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819510 132400 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-control-plane-signer\" [] issuer=\"\" (2023-02-13 06:24:28 +0000 UTC to 2024-02-13 06:24:29 +0000 UTC (now=2023-02-13 09:05:19.819502042 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819546 132400 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-apiserver-to-kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.819539831 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819576 132400 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2033-02-10 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.819570165 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819610 132400 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-signer\" [] issuer=\"\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.819596623 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819639 132400 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer\" [] issuer=\"kubelet-signer\" (2023-02-13 06:24:29 +0000 UTC to 2024-02-13 06:24:30 +0000 UTC (now=2023-02-13 09:05:19.819632175 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819723 132400 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2023-02-13 06:24:30 +0000 UTC to 2024-02-13 06:24:31 +0000 UTC (now=2023-02-13 09:05:19.819712565 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819822 132400 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" 
certDetail="\"localhost@1676279119\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.819814902 +0000 UTC))" Feb 13 04:05:19 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:19.819904 132400 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1676279119\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1676279119\" (2023-02-13 08:05:19 +0000 UTC to 2024-02-13 08:05:19 +0000 UTC (now=2023-02-13 09:05:19.819895767 +0000 UTC))" Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:22.330032 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:22.330313 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:22.331792 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:22.331894 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find 
the requested resource (get groups.user.openshift.io) Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:22.340177 132400 store.go:1482] "Monitoring resource count at path" resource="securitycontextconstraints.security.openshift.io" path="//security.openshift.io/securitycontextconstraints" Feb 13 04:05:22 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:22.342888 132400 cacher.go:435] cacher (securitycontextconstraints.security.openshift.io): initialized Feb 13 04:05:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:23.286520 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:05:23 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:23.422723 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:23 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:23.422762 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:23 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:23.555620 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:23 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:23.555645 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get 
clusterresourcequotas.quota.openshift.io) Feb 13 04:05:24 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:24.297374 132400 kube-controller-manager.go:126] kube-controller-manager is ready Feb 13 04:05:24 localhost.localdomain microshift[132400]: kube-scheduler I0213 04:05:24.313725 132400 kube-scheduler.go:89] kube-scheduler is ready Feb 13 04:05:24 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:24.560763 132400 crd.go:166] Applied openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml Feb 13 04:05:24 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:24.561062 132400 crd.go:155] Applying openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml Feb 13 04:05:25 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:25.511586 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:25 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:25.511654 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:26.124796 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:26.124841 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list 
*v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:28.286190 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:05:29 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:29.567969 132400 crd.go:166] Applied openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml Feb 13 04:05:29 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:29.568008 132400 crd.go:155] Applying openshift CRD crd/route.crd.yaml Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.599480 132400 range_allocator.go:103] No Service CIDR provided. Skipping filtering out service addresses. Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.599631 132400 range_allocator.go:109] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.599722 132400 controllermanager.go:672] Started "nodeipam" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.599749 132400 controllermanager.go:643] Starting "attachdetach" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.599761 132400 node_ipam_controller.go:155] Starting ipam controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600128 132400 shared_informer.go:273] Waiting for caches to sync for node Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.600666 132400 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager E0213 04:05:29.600687 132400 plugins.go:616] "Error initializing dynamic plugin prober" err="error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600699 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600704 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600712 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600736 132400 controllermanager.go:672] Started "attachdetach" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600741 132400 controllermanager.go:643] Starting "persistentvolume-expander" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600820 132400 attach_detach_controller.go:328] Starting attach detach controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.600846 132400 shared_informer.go:273] Waiting for caches to sync for attach detach Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.601571 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.601614 132400 controllermanager.go:672] Started "persistentvolume-expander" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.601618 132400 controllermanager.go:643] 
Starting "root-ca-cert-publisher" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.601674 132400 expand_controller.go:340] Starting expand controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.601697 132400 shared_informer.go:273] Waiting for caches to sync for expand Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.602412 132400 controllermanager.go:672] Started "root-ca-cert-publisher" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.602453 132400 controllermanager.go:643] Starting "nodelifecycle" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.602520 132400 publisher.go:101] Starting root CA certificate configmap publisher Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.602542 132400 shared_informer.go:273] Waiting for caches to sync for crt configmap Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603210 132400 node_lifecycle_controller.go:492] Controller will reconcile labels. Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603261 132400 controllermanager.go:672] Started "nodelifecycle" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603282 132400 controllermanager.go:643] Starting "ttl-after-finished" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603358 132400 node_lifecycle_controller.go:527] Sending events to api server. 
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603391 132400 node_lifecycle_controller.go:538] Starting node controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.603408 132400 shared_informer.go:273] Waiting for caches to sync for taint Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604039 132400 controllermanager.go:672] Started "ttl-after-finished" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604077 132400 controllermanager.go:643] Starting "endpoint" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604144 132400 ttlafterfinished_controller.go:104] Starting TTL after finished controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604173 132400 shared_informer.go:273] Waiting for caches to sync for TTL after finished Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604834 132400 controllermanager.go:672] Started "endpoint" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604878 132400 controllermanager.go:643] Starting "endpointslice" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604964 132400 endpoints_controller.go:178] Starting endpoint controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.604988 132400 shared_informer.go:273] Waiting for caches to sync for endpoint Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.605755 132400 controllermanager.go:672] Started "endpointslice" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.605800 132400 controllermanager.go:643] 
Starting "resourcequota" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.605885 132400 endpointslice_controller.go:257] Starting endpoint slice controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.606133 132400 shared_informer.go:273] Waiting for caches to sync for endpoint_slice Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.614813 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.614838 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.614846 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615031 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615089 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615116 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615145 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615169 132400 
resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615197 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615232 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615257 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615282 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615304 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615327 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615348 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615376 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for routes.route.openshift.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615403 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for 
networkpolicies.networking.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615430 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615455 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615479 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615504 132400 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615536 132400 controllermanager.go:672] Started "resourcequota" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615557 132400 controllermanager.go:643] Starting "serviceaccount" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615575 132400 resource_quota_controller.go:277] Starting resource quota controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615583 132400 shared_informer.go:273] Waiting for caches to sync for resource quota Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.615591 132400 resource_quota_monitor.go:295] QuotaMonitor running Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.616595 132400 controllermanager.go:672] Started "serviceaccount" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 
04:05:29.616670 132400 controllermanager.go:643] Starting "job" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.616794 132400 serviceaccounts_controller.go:111] Starting service account controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.616823 132400 shared_informer.go:273] Waiting for caches to sync for service account Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.619855 132400 resource_quota_controller.go:443] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities], removed: [] Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.619943 132400 controllermanager.go:672] Started "job" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.619972 132400 controllermanager.go:643] Starting "deployment" Feb 13 
04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.620013 132400 job_controller.go:191] Starting job controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.620113 132400 shared_informer.go:273] Waiting for caches to sync for job Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.620864 132400 controllermanager.go:672] Started "deployment" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.620913 132400 controllermanager.go:643] Starting "statefulset" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621000 132400 deployment_controller.go:154] "Starting controller" controller="deployment" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621026 132400 shared_informer.go:273] Waiting for caches to sync for deployment Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621775 132400 controllermanager.go:672] Started "statefulset" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621821 132400 controllermanager.go:643] Starting "csrsigning" Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621899 132400 stateful_set.go:152] Starting stateful set controller Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.621922 132400 shared_informer.go:273] Waiting for caches to sync for stateful set Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.622792 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.622951 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623056 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623158 132400 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623216 132400 controllermanager.go:672] Started "csrsigning"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623238 132400 controllermanager.go:643] Starting "csrapproving"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623301 132400 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623330 132400 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623352 132400 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623370 132400 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623390 132400 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623406 132400 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623428 132400 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623446 132400 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623466 132400 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623523 132400 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623565 132400 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.623602 132400 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.624427 132400 controllermanager.go:672] Started "csrapproving"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.624439 132400 controllermanager.go:643] Starting "csrcleaner"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.624480 132400 certificate_controller.go:112] Starting certificate controller "csrapproving"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.624486 132400 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625175 132400 controllermanager.go:672] Started "csrcleaner"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.625184 132400 controllermanager.go:637] "bootstrapsigner" is disabled
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625188 132400 controllermanager.go:643] Starting "route"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625191 132400 core.go:217] Will not configure cloud provider routes for allocate-node-cidrs: true, configure-cloud-routes: false.
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.625195 132400 controllermanager.go:650] Skipping "route"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625198 132400 controllermanager.go:643] Starting "persistentvolume-binder"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625239 132400 cleaner.go:82] Starting CSR cleaner controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625883 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625894 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625899 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625907 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625926 132400 controllermanager.go:672] Started "persistentvolume-binder"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625929 132400 controllermanager.go:643] Starting "ephemeral-volume"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625989 132400 pv_controller_base.go:318] Starting persistent volume controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.625996 132400 shared_informer.go:273] Waiting for caches to sync for persistent volume
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.626725 132400 controllermanager.go:672] Started "ephemeral-volume"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.626734 132400 controllermanager.go:643] Starting "endpointslicemirroring"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.626786 132400 controller.go:169] Starting ephemeral volume controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.626791 132400 shared_informer.go:273] Waiting for caches to sync for ephemeral
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.627469 132400 controllermanager.go:672] Started "endpointslicemirroring"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.627484 132400 controllermanager.go:643] Starting "disruption"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.627542 132400 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.627548 132400 shared_informer.go:273] Waiting for caches to sync for endpoint_slice_mirroring
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.628849 132400 controllermanager.go:672] Started "disruption"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.628860 132400 controllermanager.go:643] Starting "cloud-node-lifecycle"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.628915 132400 disruption.go:424] Sending events to api server.
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.628936 132400 disruption.go:435] Starting disruption controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.628940 132400 shared_informer.go:273] Waiting for caches to sync for disruption
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager E0213 04:05:29.629598 132400 core.go:207] failed to start cloud node lifecycle controller: no cloud provider provided
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.629611 132400 controllermanager.go:650] Skipping "cloud-node-lifecycle"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.629616 132400 controllermanager.go:643] Starting "replicationcontroller"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.630364 132400 controllermanager.go:672] Started "replicationcontroller"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.630371 132400 controllermanager.go:637] "ttl" is disabled
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.630375 132400 controllermanager.go:643] Starting "namespace"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.630433 132400 replica_set.go:201] Starting replicationcontroller controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.630438 132400 shared_informer.go:273] Waiting for caches to sync for ReplicationController
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.638537 132400 controllermanager.go:672] Started "namespace"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.638668 132400 controllermanager.go:643] Starting "cronjob"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.638646 132400 namespace_controller.go:195] Starting namespace controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.638741 132400 shared_informer.go:273] Waiting for caches to sync for namespace
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.639667 132400 controllermanager.go:672] Started "cronjob"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.639727 132400 controllermanager.go:643] Starting "clusterrole-aggregation"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.639706 132400 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.639795 132400 shared_informer.go:273] Waiting for caches to sync for cronjob
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.640520 132400 controllermanager.go:672] Started "clusterrole-aggregation"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.640530 132400 controllermanager.go:643] Starting "pv-protection"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.640569 132400 clusterroleaggregation_controller.go:188] Starting ClusterRoleAggregator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.640571 132400 shared_informer.go:273] Waiting for caches to sync for ClusterRoleAggregator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.641198 132400 controllermanager.go:672] Started "pv-protection"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.641236 132400 controllermanager.go:643] Starting "service-ca-cert-publisher"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.641293 132400 pv_protection_controller.go:75] Starting PV protection controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.641315 132400 shared_informer.go:273] Waiting for caches to sync for PV protection
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.642018 132400 controllermanager.go:672] Started "service-ca-cert-publisher"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.642129 132400 publisher.go:80] Starting service CA certificate configmap publisher
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.642152 132400 shared_informer.go:273] Waiting for caches to sync for crt configmap
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.644874 132400 shared_informer.go:273] Waiting for caches to sync for resource quota
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.661667 132400 shared_informer.go:273] Waiting for caches to sync for garbage collector
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.670794 132400 store.go:1482] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//route.openshift.io/routes"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.671582 132400 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="localhost.localdomain" does not exist
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.671628 132400 topologycache.go:212] Ignoring node localhost.localdomain because it has an excluded label
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.671634 132400 topologycache.go:248] Insufficient node info for topology hints (0 zones, 0 CPU, true)
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.671652 132400 graph_builder.go:660] replacing virtual node [v1/Node, namespace: kube-node-lease, name: localhost.localdomain, uid: 2a8df776-fc9e-4251-80cd-40da01e341a1] with observed node [v1/Node, namespace: , name: localhost.localdomain, uid: 2a8df776-fc9e-4251-80cd-40da01e341a1]
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.672270 132400 cacher.go:435] cacher (routes.route.openshift.io): initialized
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.677212 132400 shared_informer.go:280] Caches are synced for GC
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.681230 132400 shared_informer.go:280] Caches are synced for daemon sets
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.681240 132400 shared_informer.go:273] Waiting for caches to sync for daemon sets
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.681244 132400 shared_informer.go:280] Caches are synced for daemon sets
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.683847 132400 shared_informer.go:280] Caches are synced for ReplicaSet
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.684554 132400 store.go:1482] "Monitoring resource count at path" resource="logicalvolumes.topolvm.io" path="//topolvm.io/logicalvolumes"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.685471 132400 cacher.go:435] cacher (logicalvolumes.topolvm.io): initialized
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.690458 132400 shared_informer.go:280] Caches are synced for PVC protection
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.693532 132400 shared_informer.go:280] Caches are synced for HPA
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.695710 132400 store.go:1482] "Monitoring resource count at path" resource="rangeallocations.security.internal.openshift.io" path="//security.internal.openshift.io/rangeallocations"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:29.696226 132400 cacher.go:435] cacher (rangeallocations.security.internal.openshift.io): initialized
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.724889 132400 shared_informer.go:280] Caches are synced for expand
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725126 132400 shared_informer.go:280] Caches are synced for taint
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725288 132400 node_lifecycle_controller.go:810] Controller observed a new Node: "localhost.localdomain"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725320 132400 controller_utils.go:168] "Recording event message for node" event="Registered Node localhost.localdomain in Controller" node="localhost.localdomain"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725351 132400 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager W0213 04:05:29.725395 132400 node_lifecycle_controller.go:1053] Missing timestamp for Node localhost.localdomain. Assuming now as a timestamp.
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725426 132400 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725146 132400 shared_informer.go:280] Caches are synced for endpoint
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725698 132400 taint_manager.go:206] "Starting NoExecuteTaintManager"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725733 132400 taint_manager.go:211] "Sending events to api server"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725861 132400 event.go:294] "Event occurred" object="localhost.localdomain" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node localhost.localdomain event: Registered Node localhost.localdomain in Controller"
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725156 132400 shared_informer.go:280] Caches are synced for service account
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.726078 132400 shared_informer.go:280] Caches are synced for persistent volume
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725176 132400 shared_informer.go:280] Caches are synced for deployment
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.727057 132400 shared_informer.go:280] Caches are synced for ephemeral
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725183 132400 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725190 132400 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725195 132400 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725201 132400 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725213 132400 shared_informer.go:280] Caches are synced for certificate-csrapproving
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.725220 132400 shared_informer.go:280] Caches are synced for attach detach
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.727880 132400 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.727930 132400 endpointslicemirroring_controller.go:218] Starting 5 worker threads
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.728980 132400 shared_informer.go:280] Caches are synced for disruption
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729299 132400 shared_informer.go:280] Caches are synced for node
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729364 132400 range_allocator.go:167] Sending events to api server.
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729405 132400 range_allocator.go:171] Starting range CIDR allocator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729426 132400 shared_informer.go:273] Waiting for caches to sync for cidrallocator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729451 132400 shared_informer.go:280] Caches are synced for cidrallocator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729484 132400 shared_informer.go:280] Caches are synced for crt configmap
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729630 132400 shared_informer.go:280] Caches are synced for TTL after finished
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729688 132400 shared_informer.go:280] Caches are synced for endpoint_slice
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.729961 132400 shared_informer.go:280] Caches are synced for stateful set
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.730019 132400 shared_informer.go:280] Caches are synced for resource quota
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.730047 132400 shared_informer.go:280] Caches are synced for job
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.730463 132400 shared_informer.go:280] Caches are synced for ReplicationController
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.739175 132400 shared_informer.go:280] Caches are synced for namespace
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.740417 132400 shared_informer.go:280] Caches are synced for cronjob
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.741510 132400 shared_informer.go:280] Caches are synced for PV protection
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.741570 132400 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.742518 132400 shared_informer.go:280] Caches are synced for crt configmap
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.745723 132400 shared_informer.go:280] Caches are synced for resource quota
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.745732 132400 resource_quota_controller.go:462] synced quota controller
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.762084 132400 shared_informer.go:280] Caches are synced for garbage collector
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.762093 132400 garbagecollector.go:263] synced garbage collector
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779302 132400 shared_informer.go:280] Caches are synced for garbage collector
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779313 132400 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779371 132400 garbagecollector.go:501] "Processing object" object="openshift-storage/topolvm-controller" objectUID=7e68807e-0f97-4e47-ad8e-9776477eda7e kind="Deployment" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779515 132400 garbagecollector.go:501] "Processing object" object="openshift-dns/dns-default" objectUID=6dff73e2-7d97-46b5-9736-9e140b949edb kind="DaemonSet" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779550 132400 garbagecollector.go:501] "Processing object" object="openshift-storage/topolvm-node" objectUID=8b198a25-2b8c-4bef-8a13-9af10c5c1b48 kind="DaemonSet" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779618 132400 garbagecollector.go:501] "Processing object" object="localhost.localdomain" objectUID=70bb6f13-9d74-43e1-88b9-ae306e6a4400 kind="CSINode" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779517 132400 garbagecollector.go:501] "Processing object" object="openshift-dns/node-resolver" objectUID=3a5a0df1-4ffb-4068-b23f-88f3287a43bc kind="DaemonSet" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779529 132400 garbagecollector.go:501] "Processing object" object="openshift-ovn-kubernetes/ovnkube-master" objectUID=56f191a7-e11b-474e-8a33-6284d3b3173d kind="DaemonSet" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.779540 132400 garbagecollector.go:501] "Processing object" object="openshift-ovn-kubernetes/ovnkube-node" objectUID=f76c5dcb-0052-4bdc-a028-822babd6b0f4 kind="DaemonSet" virtual=false
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782121 132400 garbagecollector.go:540] object [apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: f76c5dcb-0052-4bdc-a028-822babd6b0f4]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782258 132400 garbagecollector.go:540] object [apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: 6dff73e2-7d97-46b5-9736-9e140b949edb]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782418 132400 garbagecollector.go:540] object [apps/v1/Deployment, namespace: openshift-storage, name: topolvm-controller, uid: 7e68807e-0f97-4e47-ad8e-9776477eda7e]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782489 132400 garbagecollector.go:540] object [apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: 3a5a0df1-4ffb-4068-b23f-88f3287a43bc]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782535 132400 garbagecollector.go:540] object [apps/v1/DaemonSet, namespace: openshift-storage, name: topolvm-node, uid: 8b198a25-2b8c-4bef-8a13-9af10c5c1b48]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.782651 132400 garbagecollector.go:540] object [apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-master, uid: 56f191a7-e11b-474e-8a33-6284d3b3173d]'s doesn't have an owner, continue on next item
Feb 13 04:05:29 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:29.783180 132400 garbagecollector.go:552] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"localhost.localdomain", UID:"70bb6f13-9d74-43e1-88b9-ae306e6a4400", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"localhost.localdomain", UID:"2a8df776-fc9e-4251-80cd-40da01e341a1", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
Feb 13 04:05:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:31.326706 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:05:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:31.326969 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:05:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:31.805770 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:05:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:31.805971 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:05:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:33.286713 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:05:34 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:34.573909 132400 crd.go:166] Applied openshift CRD crd/route.crd.yaml
Feb 13 04:05:34 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:34.573936 132400 crd.go:155] Applying openshift CRD components/lvms/topolvm.io_logicalvolumes.yaml
Feb 13 04:05:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:38.286642 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.577267 132400 crd.go:166] Applied openshift CRD components/lvms/topolvm.io_logicalvolumes.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.577287 132400 openshift-crd-manager.go:46] openshift-crd-manager applied default CRDs
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.577290 132400 openshift-crd-manager.go:48] openshift-crd-manager waiting for CRDs acceptance before proceeding
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.577739 132400 crd.go:81] Waiting for crd crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml condition.type: established
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.578940 132400 crd.go:81] Waiting for crd crd/0000_03_security-openshift_01_scc.crd.yaml condition.type: established
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.581423 132400 crd.go:81] Waiting for crd crd/route.crd.yaml condition.type: established
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.583924 132400 crd.go:81] Waiting for crd components/lvms/topolvm.io_logicalvolumes.yaml condition.type: established
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.585183 132400 openshift-crd-manager.go:52] openshift-crd-manager all CRDs are ready
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-crd-manager I0213 04:05:39.585275 132400 manager.go:119] openshift-crd-manager completed
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.585319 132400 manager.go:114] Starting route-controller-manager
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.585338 132400 manager.go:114] Starting cluster-policy-controller
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.585925 132400 core.go:170] Applying corev1 api controllers/route-controller-manager/0000_50_cluster-openshift-route-controller-manager_00_namespace.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.586067 132400 manager.go:114] Starting openshift-default-scc-manager
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.587721 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/ingress-to-route-controller-clusterrole.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.588827 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-informer-clusterrole.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.589632 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-tokenreview-clusterrole.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.590779 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-anyuid.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller W0213 04:05:39.591524 132400 policy_controller.go:74] "openshift.io/resourcequota" is disabled
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller W0213 04:05:39.591533 132400 policy_controller.go:74] "openshift.io/cluster-quota-reconciliation" is disabled
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.591536 132400 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver"
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.591996 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.592761 132400 policy_controller.go:88] Started "openshift.io/cluster-csr-approver"
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.592770 132400 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer"
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.592828 132400 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller
Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.593150 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.594001 132400 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer"
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.594015 132400 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation"
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.594231 132400 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.594245 132400 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.594422 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.595684 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.600521 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.600948 132400 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.600958 132400 policy_controller.go:91] Started Origin Controllers Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.601220 132400 base_controller.go:67] Waiting for caches to sync for 
namespace-security-allocation-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.601274 132400 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.603917 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-nonroot.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.609079 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-privileged.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.609701 132400 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.609771 132400 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.609994 132400 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-router" not found Feb 13 04:05:39 localhost.localdomain 
microshift[132400]: cluster-policy-controller I0213 04:05:39.610029 132400 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.610812 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.612231 132400 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-restricted.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.613283 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.614572 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-informer-clusterrolebinding.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.615353 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-tokenreview-clusterrolebinding.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.616483 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-leader-role.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.625197 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-separate-sa-role.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.626144 132400 
rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.627282 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.628015 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.628778 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.629567 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.630287 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.631088 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.631797 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.632507 132400 rbac.go:144] Applying rbac 
controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.633241 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.634426 132400 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.635295 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-leader-rolebinding.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.635404 132400 openshift-default-scc-manager.go:50] openshift-default-scc-manager applied default SCCs Feb 13 04:05:39 localhost.localdomain microshift[132400]: openshift-default-scc-manager I0213 04:05:39.635413 132400 manager.go:119] openshift-default-scc-manager completed Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.635428 132400 manager.go:114] Starting microshift-mdns-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.635579 132400 controller.go:67] mDNS: Starting server on interface "lo", NodeIP "192.168.122.17", NodeName "localhost.localdomain" Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.636065 132400 controller.go:67] mDNS: Starting server on interface "ens3", NodeIP "192.168.122.17", NodeName "localhost.localdomain" Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.636165 132400 controller.go:67] mDNS: Starting server on 
interface "br-ex", NodeIP "192.168.122.17", NodeName "localhost.localdomain" Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.636310 132400 routes.go:30] Starting MicroShift mDNS route watcher Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.636768 132400 routes.go:73] mDNS: waiting for route API to be ready Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.637900 132400 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-separate-sa-rolebinding.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: microshift-mdns-controller I0213 04:05:39.637997 132400 routes.go:87] mDNS: Route API ready, watching routers Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.639025 132400 core.go:170] Applying corev1 api controllers/route-controller-manager/route-controller-sa.yaml Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.640302 132400 controller_manager.go:26] Starting controllers on 0.0.0.0:8445 (unknown) Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.640841 132400 leaderelection.go:248] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.640873 132400 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8445 Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.649850 132400 leaderelection.go:258] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager W0213 04:05:39.650002 132400 route.go:78] "openshift.io/ingress-ip" is disabled Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.650010 132400 route.go:81] Starting "openshift.io/ingress-to-route" Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.650090 132400 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"f9541bb7-52a6-4020-872b-761892fc7343", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"9906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' localhost.localdomain became leader Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.659312 132400 ingress.go:262] ingress-to-route metrics registered with prometheus Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.659419 132400 route.go:91] Started "openshift.io/ingress-to-route" Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.659451 132400 route.go:93] Started Route Controllers Feb 13 04:05:39 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:39.659467 132400 ingress.go:313] Starting controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.693893 132400 base_controller.go:73] Caches are synced for 
WebhookAuthenticatorCertApprover_csr-approver-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.693906 132400 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.695014 132400 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.695025 132400 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.702214 132400 base_controller.go:73] Caches are synced for namespace-security-allocation-controller Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.702224 132400 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 
Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.702251 132400 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations Feb 13 04:05:39 localhost.localdomain microshift[132400]: cluster-policy-controller I0213 04:05:39.708723 132400 namespace_scc_allocation_controller.go:116] Repair complete Feb 13 04:05:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:40.650591 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:40.650621 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:05:43 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:43.253557 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:43 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:43.253578 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:05:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:43.286936 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:05:44 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:44.587604 132400 
openshift-route-controller-manager.go:107] route-controller-manager is ready Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.587980 132400 manager.go:114] Starting infrastructure-services-manager Feb 13 04:05:44 localhost.localdomain microshift[132400]: kustomizer I0213 04:05:44.588012 132400 manager.go:114] Starting kustomizer Feb 13 04:05:44 localhost.localdomain microshift[132400]: kustomizer I0213 04:05:44.588039 132400 apply.go:64] No kustomization found at /usr/lib/microshift/manifests/kustomization.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kustomizer I0213 04:05:44.588046 132400 apply.go:64] No kustomization found at /etc/microshift/manifests/kustomization.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kustomizer I0213 04:05:44.588049 132400 manager.go:119] kustomizer completed Feb 13 04:05:44 localhost.localdomain microshift[132400]: version-manager I0213 04:05:44.588055 132400 manager.go:114] Starting version-manager Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.588532 132400 manager.go:114] Starting kubelet Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.588779 132400 rbac.go:144] Applying rbac controllers/kube-controller-manager/csr_approver_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.590807 132400 rbac.go:144] Applying rbac controllers/cluster-policy-controller/namespace-security-allocation-controller-clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: version-manager I0213 04:05:44.590985 132400 manager.go:119] version-manager completed Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.591375 132400 server.go:412] "Kubelet version" kubeletVersion="v1.26.0" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 
04:05:44.591433 132400 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet W0213 04:05:44.591489 132400 feature_gate.go:242] Setting GA feature gate PodSecurity=true. It will be removed in a future release. Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.591535 132400 feature_gate.go:250] feature gates: &{map[APIPriorityAndFairness:true DownwardAPIHugePages:true PodSecurity:true RotateKubeletServerCertificate:false]} Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet W0213 04:05:44.591602 132400 feature_gate.go:242] Setting GA feature gate PodSecurity=true. It will be removed in a future release. Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.591629 132400 feature_gate.go:250] feature gates: &{map[APIPriorityAndFairness:true DownwardAPIHugePages:true PodSecurity:true RotateKubeletServerCertificate:false]} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.591800 132400 rbac.go:144] Applying rbac controllers/cluster-policy-controller/podsecurity-admission-label-syncer-controller-clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.592274 132400 bootstrap.go:115] "Kubeconfig exists and is valid, skipping bootstrap" path="/var/lib/microshift/resources/kubelet/kubeconfig" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.592966 132400 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/kubelet-ca.crt" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.593014 132400 rbac.go:144] Applying rbac controllers/kube-controller-manager/csr_approver_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.593184 
132400 manager.go:163] cAdvisor running in container: "/system.slice/microshift.service" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.593241 132400 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/kubelet-ca.crt" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.593896 132400 rbac.go:144] Applying rbac controllers/cluster-policy-controller/namespace-security-allocation-controller-clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.594656 132400 rbac.go:144] Applying rbac controllers/cluster-policy-controller/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.595888 132400 scheduling.go:77] Applying PriorityClass CR core/priority-class-openshift-user-critical.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.597496 132400 core.go:170] Applying corev1 api components/service-ca/ns.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.598542 132400 fs.go:133] Filesystem UUIDs: map[622c7200-2a73-4ba7-9323-3019283857c8:/dev/dm-0 6ea9fb0a-e4e4-4362-8e25-15ae49259dda:/dev/sda1 754C-708D:/dev/sda2] Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.598633 132400 fs.go:134] Filesystem partitions: map[/dev/mapper/rhel-root:{mountpoint:/var major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1260a84b52cd02191f25d30e7feacf38be3841afa9297a6b7d488a7f2f3a868b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1260a84b52cd02191f25d30e7feacf38be3841afa9297a6b7d488a7f2f3a868b/userdata/shm major:0 minor:110 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f704046b0eada00a6ebbf6378cebd6808ad5309eee21cefcb74331a00ea822f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f704046b0eada00a6ebbf6378cebd6808ad5309eee21cefcb74331a00ea822f/userdata/shm major:0 minor:111 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/863fc7c024be45beb72564a4df54f437be81a6123b43f4fd077543ae1d0bb404/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/863fc7c024be45beb72564a4df54f437be81a6123b43f4fd077543ae1d0bb404/userdata/shm major:0 minor:65 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9891b8580a56e81042d26522e66b721e32b7492910ac67262124dfe879712a41/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9891b8580a56e81042d26522e66b721e32b7492910ac67262124dfe879712a41/userdata/shm major:0 minor:224 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b94cda537d9841d96f436db8d79a8e3c42a65f0e5ee07e67c48c22aaa963c51c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b94cda537d9841d96f436db8d79a8e3c42a65f0e5ee07e67c48c22aaa963c51c/userdata/shm major:0 minor:166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d78ff638d4994bc8a2c44d321b43796ab1d3b3d0cc962d09aa36f9ae2f00db80/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d78ff638d4994bc8a2c44d321b43796ab1d3b3d0cc962d09aa36f9ae2f00db80/userdata/shm major:0 minor:180 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/de8c287d21d95b259958f94d9513e73c34740aa654c588be1cdc86b08a7989f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de8c287d21d95b259958f94d9513e73c34740aa654c588be1cdc86b08a7989f8/userdata/shm major:0 minor:67 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e0b320a743f618a892d007f259fe7a5957f8a5e6a326179dabdc87431a85ba52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0b320a743f618a892d007f259fe7a5957f8a5e6a326179dabdc87431a85ba52/userdata/shm major:0 minor:62 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:223 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/volumes/kubernetes.io~projected/kube-api-access-n5x8k:{mountpoint:/var/lib/kubelet/pods/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/volumes/kubernetes.io~projected/kube-api-access-n5x8k major:0 minor:53 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0390852d-4e2a-4c00-9b0f-cbf1945008a2/volumes/kubernetes.io~projected/kube-api-access-q9d8p:{mountpoint:/var/lib/kubelet/pods/0390852d-4e2a-4c00-9b0f-cbf1945008a2/volumes/kubernetes.io~projected/kube-api-access-q9d8p major:0 minor:52 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~projected/kube-api-access-ghk5h:{mountpoint:/var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~projected/kube-api-access-ghk5h major:0 minor:54 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~secret/signing-key major:0 minor:48 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~projected/kube-api-access-5gtpr:{mountpoint:/var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~projected/kube-api-access-5gtpr major:0 minor:57 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~secret/default-certificate major:0 minor:46 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~empty-dir/lvmd-socket-dir:{mountpoint:/var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~empty-dir/lvmd-socket-dir major:0 minor:47 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~projected/kube-api-access-sjk85:{mountpoint:/var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~projected/kube-api-access-sjk85 major:0 minor:51 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9744aca6-9463-42d2-a05e-f1e3af7b175e/volumes/kubernetes.io~projected/kube-api-access-gbckr:{mountpoint:/var/lib/kubelet/pods/9744aca6-9463-42d2-a05e-f1e3af7b175e/volumes/kubernetes.io~projected/kube-api-access-gbckr major:0 minor:50 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c608b4f5-e1d8-4927-9659-5771e2bd21ac/volumes/kubernetes.io~projected/kube-api-access-4gs8j:{mountpoint:/var/lib/kubelet/pods/c608b4f5-e1d8-4927-9659-5771e2bd21ac/volumes/kubernetes.io~projected/kube-api-access-4gs8j major:0 minor:55 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~projected/kube-api-access-qgssb:{mountpoint:/var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~projected/kube-api-access-qgssb major:0 minor:56 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:49 fsType:tmpfs blockSize:0} overlay_0-105:{mountpoint:/var/lib/containers/storage/overlay/18cb3f04fdac6b1f103f0a3e68b69856a689a266f72bb57718993758ba1c74d5/merged major:0 minor:105 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/b7832cf4ea90333893233358ab9b7b37ccba401679fc8df60888229fede7c7a3/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/09ebd058f137a0a77ffce66c97d434ca3bd4487f822d48381cff98e275705f34/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/3ef025c212f62fd35a6505e7fea4f2d0c1c35f1554f34092c7b1133571cee6e1/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/var/lib/containers/storage/overlay/259192376cda6e2e1ede487a048d2daab82d132def62a31c6a254c5dca1b400b/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-137:{mountpoint:/var/lib/containers/storage/overlay/e07ebe57dcfa4cca1186abb03d77af250b07f3c0695920fcdf1d750a128939d0/merged major:0 minor:137 fsType:overlay blockSize:0} overlay_0-139:{mountpoint:/var/lib/containers/storage/overlay/a50c27eb9c1b1da96260a2b52d7b79c3dff6bb3a775e18717d89b51925e1cfd9/merged major:0 minor:139 fsType:overlay blockSize:0} overlay_0-145:{mountpoint:/var/lib/containers/storage/overlay/3840298c215d81849badeb9afe65e14d109e07a26272ddbe55bd7650d7a96bd0/merged major:0 minor:145 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/aa2f05c84e2cc268bde8849e429ea749a8760af445250d6f0e97e9d68aa6cc84/merged major:0 minor:150 fsType:overlay blockSize:0} 
overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/a9ef32aa5d39fb20e80afdf53aa1b12884265767b235d1d1080e235f72721b0f/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/131d77ec57da8d85cec68519846732d654c49c81b70fcec3a22913877332b372/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/66c5328dab54b09d8462882e9858bf9694a54e77ba6e238e96068c217108011c/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/f2dca01abd1dbeab1f986915c6f822e85e9509cedf3422b981d54077d5d3d84b/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/53f3f1c259755dff3be5e8aec764dd4a186ed4fcc6bab6852de985e71760c9fe/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/fca069effbb2b3ead6d5eaeb8078ca3051d1815a6a8461f36ba0c0b57cdd0bc2/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-196:{mountpoint:/var/lib/containers/storage/overlay/5aa08c2eff1aa361750b9eb0927e912c5783b09a8e21985bc5761e43bc7b7789/merged major:0 minor:196 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/727a6c049d26c23615b0aaa7e7954a2ca2bcf1df0f9389051db9559370ec29aa/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-214:{mountpoint:/var/lib/containers/storage/overlay/4923416ecc31f8df0fef746221fc9e27b8d65937ee1f02a595527ec6992c433d/merged major:0 minor:214 fsType:overlay blockSize:0} overlay_0-226:{mountpoint:/var/lib/containers/storage/overlay/f82e8bb1c0979d7ef9a100c389626ed6490d429bcd6b6214cd947e911045bc27/merged major:0 minor:226 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/25253d441f79fbd9b0831545c446ae71e18ea4fce1f1bed4219f9477bb3577d7/merged major:0 minor:228 fsType:overlay blockSize:0} 
overlay_0-63:{mountpoint:/var/lib/containers/storage/overlay/7aaa334463da1fbfc2b1ea378b741161926c64afcf102f46de38fa6ee3d4efef/merged major:0 minor:63 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/b26fcc85b27b83e778621b43457cf7864b91754c218d9c13fc86f8bdb6e39b23/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/2ceae0159b51a7d4f7a9e6d088b0373620c285341da3dd007ea649bcc803a9e2/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/c65697addac8fa5bbb28b96367daaade3dd2217bef18b18ac6ee5cf70bb44366/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/04e76ac05005a49afbb803071bcea3cf49105ab83c4e54bf83fd0ffb5b4ad90a/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/533ade7c58eb5ec3f334f615b96b89cf823129b788dfbd69c488b332bf971697/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/b3e87e154978fdc80ccb6ffb6370af426292c6538818a618cd6a6bdbb7fca9fc/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/b91071c6bdd40f026095ef279720ddca2a7106c3d49c1050642cc214cd59683e/merged major:0 minor:82 fsType:overlay blockSize:0}]
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.598807 132400 nvidia.go:55] NVIDIA GPU metrics disabled
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.599839 132400 rbac.go:144] Applying rbac components/service-ca/clusterrolebinding.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.601110 132400 rbac.go:144] Applying rbac components/service-ca/clusterrole.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]:
kubelet I0213 04:05:44.601206 132400 manager.go:212] Machine: {Timestamp:2023-02-13 04:05:44.600958723 -0500 EST m=+31.781305008 CPUVendorID:GenuineIntel NumCores:2 NumPhysicalCores:1 NumSockets:2 CpuFrequency:2995200 MemoryCapacity:2980528128 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:b272982b892b46ecad465d45bd7d3b62 SystemUUID:b272982b-892b-46ec-ad46-5d45bd7d3b62 BootID:fb9632ca-c89d-4c51-815b-dd7303245040 Filesystems:[{Device:/var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~empty-dir/lvmd-socket-dir DeviceMajor:0 DeviceMinor:47 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/0390852d-4e2a-4c00-9b0f-cbf1945008a2/volumes/kubernetes.io~projected/kube-api-access-q9d8p DeviceMajor:0 DeviceMinor:52 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/volumes/kubernetes.io~projected/kube-api-access-n5x8k DeviceMajor:0 DeviceMinor:53 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9891b8580a56e81042d26522e66b721e32b7492910ac67262124dfe879712a41/userdata/shm DeviceMajor:0 DeviceMinor:224 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e0b320a743f618a892d007f259fe7a5957f8a5e6a326179dabdc87431a85ba52/userdata/shm DeviceMajor:0 DeviceMinor:62 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-105 DeviceMajor:0 DeviceMinor:105 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d78ff638d4994bc8a2c44d321b43796ab1d3b3d0cc962d09aa36f9ae2f00db80/userdata/shm DeviceMajor:0 DeviceMinor:180 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:1490264064 Type:vfs Inodes:363834 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:25 Capacity:1490264064 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:49 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~projected/kube-api-access-ghk5h DeviceMajor:0 DeviceMinor:54 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b94cda537d9841d96f436db8d79a8e3c42a65f0e5ee07e67c48c22aaa963c51c/userdata/shm DeviceMajor:0 DeviceMinor:166 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/var/lib/kubelet/pods/763e920a-b594-4485-bf77-dfed5dddbf03/volumes/kubernetes.io~projected/kube-api-access-sjk85 DeviceMajor:0 DeviceMinor:51 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/volumes/kubernetes.io~projected/kube-api-access-qgssb DeviceMajor:0 DeviceMinor:56 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:10726932480 Type:vfs 
Inodes:5242880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/var/lib/kubelet/pods/9744aca6-9463-42d2-a05e-f1e3af7b175e/volumes/kubernetes.io~projected/kube-api-access-gbckr DeviceMajor:0 DeviceMinor:50 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f704046b0eada00a6ebbf6378cebd6808ad5309eee21cefcb74331a00ea822f/userdata/shm DeviceMajor:0 DeviceMinor:111 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/var/lib/kubelet/pods/2e7bce65-b199-4d8a-bc2f-c63494419251/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:48 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-214 DeviceMajor:0 DeviceMinor:214 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:223 Capacity:298049536 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:46 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/run/containers/storage/overlay-containers/863fc7c024be45beb72564a4df54f437be81a6123b43f4fd077543ae1d0bb404/userdata/shm DeviceMajor:0 DeviceMinor:65 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-226 DeviceMajor:0 DeviceMinor:226 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-228 
DeviceMajor:0 DeviceMinor:228 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-196 DeviceMajor:0 DeviceMinor:196 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:832446464 Type:vfs Inodes:409600 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:1490264064 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-139 DeviceMajor:0 DeviceMinor:139 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de8c287d21d95b259958f94d9513e73c34740aa654c588be1cdc86b08a7989f8/userdata/shm DeviceMajor:0 DeviceMinor:67 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-63 DeviceMajor:0 DeviceMinor:63 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-137 DeviceMajor:0 DeviceMinor:137 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-145 DeviceMajor:0 DeviceMinor:145 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1260a84b52cd02191f25d30e7feacf38be3841afa9297a6b7d488a7f2f3a868b/userdata/shm DeviceMajor:0 DeviceMinor:110 Capacity:67108864 Type:vfs Inodes:363834 HasInodes:true} {Device:/var/lib/kubelet/pods/c608b4f5-e1d8-4927-9659-5771e2bd21ac/volumes/kubernetes.io~projected/kube-api-access-4gs8j DeviceMajor:0 
DeviceMinor:55 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:/dev/mapper/rhel-root DeviceMajor:253 DeviceMinor:0 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:/var/lib/kubelet/pods/41b0089d-73d0-450a-84f5-8bfec82d97f9/volumes/kubernetes.io~projected/kube-api-access-5gtpr DeviceMajor:0 DeviceMinor:57 Capacity:2980528128 Type:vfs Inodes:363834 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:10726932480 Type:vfs Inodes:5242880 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:10737418240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:mq-deadline}] NetworkDevices:[{Name:1260a84b52cd021 MacAddress:92:3c:ac:d3:f4:50 Speed:10000 Mtu:1400} {Name:3f704046b0eada0 MacAddress:36:98:a1:e4:e2:f9 Speed:10000 Mtu:1400} {Name:9891b8580a56e81 MacAddress:8e:be:dc:cf:b4:75 Speed:10000 Mtu:1400} {Name:b94cda537d9841d MacAddress:f2:f7:16:2b:75:bb Speed:10000 Mtu:1400} {Name:br-ex MacAddress:52:54:00:26:a5:8a Speed:0 Mtu:1500} {Name:d78ff638d4994bc MacAddress:ae:57:02:a6:44:da Speed:10000 Mtu:1400} {Name:ens3 MacAddress:52:54:00:26:a5:8a Speed:-1 Mtu:1500} {Name:ovs-system MacAddress:0e:7c:83:e1:f6:b2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:2980528128 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 13 04:05:44
localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.601363 132400 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.601835 132400 manager.go:228] Version: {KernelVersion:4.18.0-425.10.1.el8_7.x86_64 ContainerOsVersion:Red Hat Enterprise Linux 8.7 (Ootpa) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.601930 132400 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602337 132400 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602389 132400 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602420 132400
topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.602350 132400 rbac.go:144] Applying rbac components/service-ca/rolebinding.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602445 132400 container_manager_linux.go:308] "Creating device plugin manager"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602482 132400 manager.go:125] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602508 132400 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.602535 132400 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.603605 132400 remote_runtime.go:121] "Validated CRI v1 runtime API"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.603841 132400 rbac.go:144] Applying rbac components/service-ca/role.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.604403 132400 remote_image.go:97] "Validated CRI v1 image API"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.604447 132400 server.go:1147] "Using root directory" path="/var/lib/kubelet"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.604488 132400 kubelet.go:407] "Attempting to sync node with API server"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.604512 132400 kubelet.go:306] "Adding apiserver pod source"
Feb 13 04:05:44
localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.604537 132400 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605415 132400 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="cri-o" version="1.25.2-4.rhaos4.12.git66af2f6.el8" apiVersion="v1"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605598 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605638 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605680 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605719 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605745 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605764 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605785 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605805 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605826 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605845
132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605891 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605911 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.605935 132400 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.606059 132400 server.go:1186] "Started kubelet"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet E0213 04:05:44.606280 132400 kubelet.go:1399] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.607478 132400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.608283 132400 core.go:170] Applying corev1 api components/service-ca/sa.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.609570 132400 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.610006 132400 server.go:451] "Adding debug handlers to kubelet server"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.612470 132400 volume_manager.go:291] "The desired_state_of_world populator starts"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.612504 132400 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 13 04:05:44 localhost.localdomain
microshift[132400]: kubelet I0213 04:05:44.615097 132400 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.615238 132400 factory.go:55] Registering systemd factory
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.617800 132400 factory.go:153] Registering CRI-O factory
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.617837 132400 factory.go:103] Registering Raw factory
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.617857 132400 manager.go:1201] Started watching for new ooms in manager
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.618165 132400 manager.go:302] Starting recovery of all containers
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.624985 132400 manager.go:307] Recovery completed
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.631788 132400 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.661445 132400 apps.go:94] Applying apps api components/service-ca/deployment.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.662825 132400 kubelet_network_linux.go:63] "Initialized iptables rules."
protocol=IPv6
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.662908 132400 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.662954 132400 kubelet.go:2133] "Starting kubelet main sync loop"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet E0213 04:05:44.662996 132400 kubelet.go:2157] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663315 132400 cpu_manager.go:215] "Starting CPU manager" policy="none"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663323 132400 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663330 132400 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663388 132400 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663395 132400 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663399 132400 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663402 132400 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.663406 132400 policy_none.go:49] "None policy: Start"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.667613 132400 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 13 04:05:44
localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.667643 132400 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.667895 132400 state_mem.go:75] "Updated machine memory state"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.667903 132400 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.671283 132400 generic.go:332] "Generic (PLEG): container finished" podID=c608b4f5-e1d8-4927-9659-5771e2bd21ac containerID="9b6d8a32ca92f23632ee5e9b6ba757dda187d1eb44ad57b68f47afebaf9d8728" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:44.672516 132400 controller.go:615] quota admission added evaluator for: deployments.apps
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.674781 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/2.log"
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.674858 132400 generic.go:332] "Generic (PLEG): container finished" podID=0390852d-4e2a-4c00-9b0f-cbf1945008a2 containerID="a257774a21f8c38279f035d34f1e830a5944dccd0853d98f98cbbe2b7862176e" exitCode=143
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.680589 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1743570033147009 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/service-ca -n openshift-service-ca because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13
04:05:44.680550409 -0500 EST m=+31.860896691,LastTimestamp:2023-02-13 04:05:44.680550409 -0500 EST m=+31.860896691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.681250 132400 storage.go:69] Applying sc components/lvms/topolvm_default-storage-class.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.683913 132400 storage.go:126] Applying csiDriver components/lvms/csi-driver.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.686351 132400 core.go:170] Applying corev1 api components/lvms/topolvm-openshift-storage_namespace.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.688648 132400 core.go:170] Applying corev1 api components/lvms/topolvm-node_v1_serviceaccount.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.691149 132400 core.go:170] Applying corev1 api components/lvms/topolvm-controller_v1_serviceaccount.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.692478 132400 rbac.go:144] Applying rbac components/lvms/topolvm-controller_rbac.authorization.k8s.io_v1_role.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.692919 132400 generic.go:332] "Generic (PLEG): container finished" podID=41b0089d-73d0-450a-84f5-8bfec82d97f9 containerID="e5f3457666c38518369411a2705160dbd818b3ad18473d1ee90d666928dbd8d5" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.693651 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_role.yaml
Feb 13 04:05:44
localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.693988 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="302aa47ff46bb2d6b45572d0d1cf955fa76bcf2ceb0ffe9556913a63a67cceba" exitCode=2
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.694027 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="256bc3b44bcaf0961e036a129772ecb541572ae9099fea3714a9ee5ee89327de" exitCode=2
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.694072 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="9ee30baebfe0e235ffeed4baf7789c79901e924878d900552badae4730ef7bcc" exitCode=2
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.694103 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="1575137a44aa2b36604a81c6790648f4e1d8da769f62cc3a71df04b8196af660" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.694129 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="97a2a926204ad7c917872849ebc0e449918a10cd5c1bfc93f4f29381a484171e" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.694978 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="7e387da48f96a4b09e4f1b967bf7bfec98e906343b0b6994358216503c15441f" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.695389 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_role.yaml
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.695890 132400
generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="f255f4eb69604c1e6bf50f06cf96a70fe880978735334642d0ed181376f9111f" exitCode=143
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.695899 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="97352d67f478b1d1c8cd5883527175619c9af710caaace61e1ed9a8c7db3eb97" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.695905 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="28a23b4dfe4e59e0c01a5e229a51deae1a8c2d130eb30f1367867e0c024b828b" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.695911 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="4d9a087ce5c7df4274efa026c3ed47959db4974177b5dc845636bd745df49f3f" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.695916 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="9ea8319697af0764b9e431751b819b8e571bd2a58f7e7c706249c359ac413df9" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.696361 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="7349fc23698f553ff5489f20d2454801ebbe0dc758fc306ecf0a8498469dabdb" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.696371 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="a3204dbcdf71733932b6a8aa25e9862e76bc7e8fb6ff481d6ec7066d415ffa1f" exitCode=0
Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.697040 132400 logs.go:323] "Finished parsing
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/2.log" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.697187 132400 rbac.go:144] Applying rbac components/lvms/topolvm-controller_rbac.authorization.k8s.io_v1_rolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.697427 132400 generic.go:332] "Generic (PLEG): container finished" podID=0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc containerID="20535f54100b0f7f6e34e5d0ec94424dbb95ea06e69ce637950770f3db6f820e" exitCode=2 Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.697757 132400 generic.go:332] "Generic (PLEG): container finished" podID=0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc containerID="3165b5e479fa48097d83984fec120a1f040ab983b8b6306196d243e6719d6a9d" exitCode=0 Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.697803 132400 generic.go:332] "Generic (PLEG): container finished" podID=0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc containerID="6c9d73c5e0390892a9c786fe39100f9747806ab28f666c35778e72923fb5c757" exitCode=0 Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.697829 132400 generic.go:332] "Generic (PLEG): container finished" podID=0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc containerID="d3ba1fdb0d00bff15c239ea3ece72cc195b33b14da8b025f38a2a7e130e5f05e" exitCode=0 Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.697931 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_rolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.698679 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_rolebinding.yaml Feb 13 04:05:44 localhost.localdomain 
microshift[132400]: infrastructure-services-manager I0213 04:05:44.699916 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.700865 132400 rbac.go:144] Applying rbac components/lvms/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.701626 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.702558 132400 rbac.go:144] Applying rbac components/lvms/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.703256 132400 rbac.go:144] Applying rbac components/lvms/topolvm-node_rbac.authorization.k8s.io_v1_clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.704486 132400 rbac.go:144] Applying rbac components/lvms/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.705293 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.705992 132400 rbac.go:144] Applying rbac components/lvms/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.706668 132400 rbac.go:144] Applying rbac 
components/lvms/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.707269 132400 rbac.go:144] Applying rbac components/lvms/topolvm-node_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.708320 132400 core.go:170] Applying corev1 api components/lvms/topolvm-lvmd-config_configmap_v1.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.709584 132400 apps.go:94] Applying apps api components/lvms/topolvm-controller_deployment.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.712898 132400 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.713370 132400 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.713422 132400 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.713454 132400 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.713480 132400 kubelet_node_status.go:72] "Attempting to register node" node="localhost.localdomain" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.728092 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1743570035e92cf4 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/topolvm-controller -n openshift-storage because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.728046836 -0500 EST m=+31.908393123,LastTimestamp:2023-02-13 04:05:44.728046836 -0500 EST m=+31.908393123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.728735 132400 apps.go:94] Applying apps api components/lvms/topolvm-node_daemonset.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:44.740299 132400 controller.go:615] quota admission added evaluator for: daemonsets.apps Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.745183 132400 kubelet_node_status.go:110] "Node was previously registered" node="localhost.localdomain" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.745238 132400 kubelet_node_status.go:75] "Successfully registered node" node="localhost.localdomain" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.745922 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1743570036f92129 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/topolvm-node -n openshift-storage because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.745869609 -0500 EST m=+31.926215892,LastTimestamp:2023-02-13 04:05:44.745869609 -0500 EST m=+31.926215892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.746122 132400 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.746423 132400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.746888 132400 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeNotReady" Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet I0213 04:05:44.746955 132400 setters.go:548] "Node became not ready" node="localhost.localdomain" condition={Type:Ready Status:False LastHeartbeatTime:2023-02-13 04:05:44.746880168 -0500 EST m=+31.927226441 LastTransitionTime:2023-02-13 04:05:44.746880168 -0500 EST m=+31.927226441 Reason:KubeletNotReady Message:container runtime status check may not have completed yet} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.747880 132400 scc.go:87] Applying scc api components/lvms/topolvm-node-securitycontextconstraint.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.750049 132400 core.go:170] Applying corev1 api components/openshift-router/namespace.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.752201 132400 rbac.go:144] Applying rbac components/openshift-router/cluster-role.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:44.760422 132400 topologycache.go:212] Ignoring node localhost.localdomain because it has an excluded label Feb 13 04:05:44 localhost.localdomain microshift[132400]: kube-controller-manager I0213 
04:05:44.760436 132400 topologycache.go:248] Insufficient node info for topology hints (0 zones, 0 CPU, true) Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.764083 132400 rbac.go:144] Applying rbac components/openshift-router/cluster-role-binding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet E0213 04:05:44.764564 132400 kubelet.go:2157] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.765754 132400 core.go:170] Applying corev1 api components/openshift-router/service-account.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.773999 132400 core.go:170] Applying corev1 api components/openshift-router/configmap.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.797469 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003a0be2a6 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapUpdated,Message:Updated ConfigMap/service-ca-bundle -n openshift-ingress: Feb 13 04:05:44 localhost.localdomain microshift[132400]: cause by changes in data.service-ca.crt,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.797430438 -0500 EST m=+31.977776724,LastTimestamp:2023-02-13 04:05:44.797430438 -0500 EST m=+31.977776724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.798064 132400 core.go:170] Applying corev1 api components/openshift-router/service-internal.yaml Feb 13 04:05:44 
localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.827199 132400 apps.go:94] Applying apps api components/openshift-router/deployment.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.833116 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003c2be5a2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/router-default -n openshift-ingress because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.833082786 -0500 EST m=+32.013429062,LastTimestamp:2023-02-13 04:05:44.833082786 -0500 EST m=+32.013429062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.833585 132400 core.go:170] Applying corev1 api components/openshift-dns/dns/namespace.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.835058 132400 core.go:170] Applying corev1 api components/openshift-dns/dns/service.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.836782 132400 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.838148 132400 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role-binding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.839372 132400 core.go:170] Applying corev1 api components/openshift-dns/dns/service-account.yaml Feb 13 04:05:44 localhost.localdomain 
microshift[132400]: infrastructure-services-manager I0213 04:05:44.840152 132400 core.go:170] Applying corev1 api components/openshift-dns/node-resolver/service-account.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.841497 132400 core.go:170] Applying corev1 api components/openshift-dns/dns/configmap.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.843070 132400 apps.go:94] Applying apps api components/openshift-dns/dns/daemonset.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.848892 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003d1c9275 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/dns-default -n openshift-dns because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.848855669 -0500 EST m=+32.029201949,LastTimestamp:2023-02-13 04:05:44.848855669 -0500 EST m=+32.029201949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.848980 132400 apps.go:94] Applying apps api components/openshift-dns/node-resolver/daemonset.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.853428 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003d61e0b3 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/node-resolver -n openshift-dns because it 
changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.853397683 -0500 EST m=+32.033743962,LastTimestamp:2023-02-13 04:05:44.853397683 -0500 EST m=+32.033743962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.853516 132400 ovn.go:67] OVNKubernetes config file not found, assuming default values Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.853950 132400 core.go:170] Applying corev1 api components/ovn/namespace.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.855435 132400 core.go:170] Applying corev1 api components/ovn/node/serviceaccount.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.856394 132400 core.go:170] Applying corev1 api components/ovn/master/serviceaccount.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.857927 132400 rbac.go:144] Applying rbac components/ovn/role.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.859305 132400 rbac.go:144] Applying rbac components/ovn/rolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.860819 132400 rbac.go:144] Applying rbac components/ovn/clusterrole.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.862377 132400 rbac.go:144] Applying rbac components/ovn/clusterrolebinding.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.863885 132400 core.go:170] Applying corev1 api components/ovn/configmap.yaml Feb 13 04:05:44 
localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.865492 132400 apps.go:94] Applying apps api components/ovn/master/daemonset.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.873834 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003e99374c dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/ovnkube-master -n openshift-ovn-kubernetes because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.873801548 -0500 EST m=+32.054147831,LastTimestamp:2023-02-13 04:05:44.873801548 -0500 EST m=+32.054147831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.873847 132400 apps.go:94] Applying apps api components/ovn/node/daemonset.yaml Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.878732 132400 recorder_logging.go:44] &Event{ObjectMeta:{dummy.174357003ee3d1bc dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/ovnkube-node -n openshift-ovn-kubernetes because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-02-13 04:05:44.878690748 -0500 EST m=+32.059037031,LastTimestamp:2023-02-13 04:05:44.878690748 -0500 EST m=+32.059037031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Feb 13 04:05:44 localhost.localdomain microshift[132400]: 
infrastructure-services-manager I0213 04:05:44.878853 132400 infra-services-controller.go:61] infrastructure-services-manager launched ocp componets Feb 13 04:05:44 localhost.localdomain microshift[132400]: infrastructure-services-manager I0213 04:05:44.878880 132400 manager.go:119] infrastructure-services-manager completed Feb 13 04:05:44 localhost.localdomain microshift[132400]: kubelet E0213 04:05:44.965080 132400 kubelet.go:2157] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 04:05:45 localhost.localdomain microshift[132400]: kubelet E0213 04:05:45.365302 132400 kubelet.go:2157] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 04:05:45 localhost.localdomain microshift[132400]: kubelet I0213 04:05:45.605352 132400 apiserver.go:52] "Watching apiserver" Feb 13 04:05:46 localhost.localdomain microshift[132400]: kubelet E0213 04:05:46.166468 132400 kubelet.go:2157] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 04:05:47 localhost.localdomain microshift[132400]: kubelet E0213 04:05:47.766999 132400 kubelet.go:2157] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 04:05:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:48.286910 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.590151 132400 kubelet.go:153] kubelet is ready Feb 13 04:05:49 localhost.localdomain microshift[132400]: ??? I0213 04:05:49.590185 132400 run.go:140] MicroShift is ready Feb 13 04:05:49 localhost.localdomain systemd[1]: Started MicroShift. Feb 13 04:05:49 localhost.localdomain microshift[132400]: ??? 
I0213 04:05:49.591209 132400 run.go:145] sent sd_notify readiness message Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.598493 132400 manager.go:281] "Starting Device Plugin manager" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.598531 132400 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.598539 132400 server.go:79] "Starting device plugin registration server" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.601554 132400 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.601648 132400 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kubelet I0213 04:05:49.601654 132400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 04:05:49 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:49.728190 132400 node_lifecycle_controller.go:909] Node localhost.localdomain is NotReady as of 2023-02-13 04:05:49.72818187 -0500 EST m=+36.908528138. Adding it to the Taint queue. Feb 13 04:05:49 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:49.728329 132400 node_lifecycle_controller.go:1204] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.967364 132400 kubelet.go:2219] "SyncLoop ADD" source="api" pods="[openshift-dns/node-resolver-sgsm4 openshift-ingress/router-default-85d64c4987-bbdnr openshift-ovn-kubernetes/ovnkube-master-86mcc openshift-ovn-kubernetes/ovnkube-node-6gpbh openshift-service-ca/service-ca-7bd9547b57-vhmkf openshift-storage/topolvm-controller-78cbfc4867-qdfs4 openshift-storage/topolvm-node-9bnp5 openshift-dns/dns-default-z4v2p]" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.967745 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.967862 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.967915 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.967964 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.969018 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.969081 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.969102 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.969136 132400 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996217 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sgsm4" event=&{ID:c608b4f5-e1d8-4927-9659-5771e2bd21ac Type:ContainerDied Data:9b6d8a32ca92f23632ee5e9b6ba757dda187d1eb44ad57b68f47afebaf9d8728} Feb 13 04:05:50 
localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996558 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sgsm4" event=&{ID:c608b4f5-e1d8-4927-9659-5771e2bd21ac Type:ContainerStarted Data:863fc7c024be45beb72564a4df54f437be81a6123b43f4fd077543ae1d0bb404} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996572 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" event=&{ID:0390852d-4e2a-4c00-9b0f-cbf1945008a2 Type:ContainerDied Data:a257774a21f8c38279f035d34f1e830a5944dccd0853d98f98cbbe2b7862176e} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996586 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" event=&{ID:0390852d-4e2a-4c00-9b0f-cbf1945008a2 Type:ContainerStarted Data:e0b320a743f618a892d007f259fe7a5957f8a5e6a326179dabdc87431a85ba52} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996593 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-85d64c4987-bbdnr" event=&{ID:41b0089d-73d0-450a-84f5-8bfec82d97f9 Type:ContainerDied Data:e5f3457666c38518369411a2705160dbd818b3ad18473d1ee90d666928dbd8d5} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996601 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-85d64c4987-bbdnr" event=&{ID:41b0089d-73d0-450a-84f5-8bfec82d97f9 Type:ContainerStarted Data:9891b8580a56e81042d26522e66b721e32b7492910ac67262124dfe879712a41} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996618 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied 
Data:302aa47ff46bb2d6b45572d0d1cf955fa76bcf2ceb0ffe9556913a63a67cceba} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996625 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:256bc3b44bcaf0961e036a129772ecb541572ae9099fea3714a9ee5ee89327de} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996629 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:9ee30baebfe0e235ffeed4baf7789c79901e924878d900552badae4730ef7bcc} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996638 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:1575137a44aa2b36604a81c6790648f4e1d8da769f62cc3a71df04b8196af660} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996648 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:97a2a926204ad7c917872849ebc0e449918a10cd5c1bfc93f4f29381a484171e} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996653 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:d78ff638d4994bc8a2c44d321b43796ab1d3b3d0cc962d09aa36f9ae2f00db80} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996669 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" 
event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:7e387da48f96a4b09e4f1b967bf7bfec98e906343b0b6994358216503c15441f} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996678 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:b94cda537d9841d96f436db8d79a8e3c42a65f0e5ee07e67c48c22aaa963c51c} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996683 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:f255f4eb69604c1e6bf50f06cf96a70fe880978735334642d0ed181376f9111f} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996689 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:97352d67f478b1d1c8cd5883527175619c9af710caaace61e1ed9a8c7db3eb97} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996695 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:28a23b4dfe4e59e0c01a5e229a51deae1a8c2d130eb30f1367867e0c024b828b} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996700 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:4d9a087ce5c7df4274efa026c3ed47959db4974177b5dc845636bd745df49f3f} Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996707 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" 
event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:9ea8319697af0764b9e431751b819b8e571bd2a58f7e7c706249c359ac413df9}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996719 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:3f704046b0eada00a6ebbf6378cebd6808ad5309eee21cefcb74331a00ea822f}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996725 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:7349fc23698f553ff5489f20d2454801ebbe0dc758fc306ecf0a8498469dabdb}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996731 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:a3204dbcdf71733932b6a8aa25e9862e76bc7e8fb6ff481d6ec7066d415ffa1f}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996741 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:1260a84b52cd02191f25d30e7feacf38be3841afa9297a6b7d488a7f2f3a868b}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996752 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerDied Data:20535f54100b0f7f6e34e5d0ec94424dbb95ea06e69ce637950770f3db6f820e}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996758 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerDied Data:3165b5e479fa48097d83984fec120a1f040ab983b8b6306196d243e6719d6a9d}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996763 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerDied Data:6c9d73c5e0390892a9c786fe39100f9747806ab28f666c35778e72923fb5c757}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996768 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerDied Data:d3ba1fdb0d00bff15c239ea3ece72cc195b33b14da8b025f38a2a7e130e5f05e}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet I0213 04:05:50.996776 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerStarted Data:de8c287d21d95b259958f94d9513e73c34740aa654c588be1cdc86b08a7989f8}
Feb 13 04:05:50 localhost.localdomain microshift[132400]: kubelet W0213 04:05:50.997792 132400 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.015999 132400 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet W0213 04:05:51.022653 132400 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:51.045022 132400 controller.go:615] quota admission added evaluator for: endpoints
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kube-apiserver I0213 04:05:51.046828 132400 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet W0213 04:05:51.047537 132400 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5abdcaf_5a6a_4845_8ad6_12ad1caadfa7.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5abdcaf_5a6a_4845_8ad6_12ad1caadfa7.slice: no such file or directory
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.052819 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.052967 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-node-log\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053032 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-socket-dir\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053079 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gs8j\" (UniqueName: \"kubernetes.io/projected/c608b4f5-e1d8-4927-9659-5771e2bd21ac-kube-api-access-4gs8j\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053106 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-var-lib-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053134 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-openvswitch\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053147 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch-node\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-etc-openvswitch-node\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053158 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-node-log\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053169 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-netd\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053180 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbckr\" (UniqueName: \"kubernetes.io/projected/9744aca6-9463-42d2-a05e-f1e3af7b175e-kube-api-access-gbckr\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053191 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-log-socket\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053209 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0390852d-4e2a-4c00-9b0f-cbf1945008a2-env-overrides\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053222 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjk85\" (UniqueName: \"kubernetes.io/projected/763e920a-b594-4485-bf77-dfed5dddbf03-kube-api-access-sjk85\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053233 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053244 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-netns\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053255 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5x8k\" (UniqueName: \"kubernetes.io/projected/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kube-api-access-n5x8k\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053276 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-ovn\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053290 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-csi-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053300 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-socket-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053310 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-key\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053324 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghk5h\" (UniqueName: \"kubernetes.io/projected/2e7bce65-b199-4d8a-bc2f-c63494419251-kube-api-access-ghk5h\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053336 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kubeconfig\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053361 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-etc-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053379 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-certs\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053398 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c608b4f5-e1d8-4927-9659-5771e2bd21ac-hosts-file\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053410 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-systemd-units\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053425 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config-dir\" (UniqueName: \"kubernetes.io/configmap/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-config-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053435 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-node-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053445 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-metrics-tls\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053457 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/41b0089d-73d0-450a-84f5-8bfec82d97f9-default-certificate\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053467 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-ovn\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053477 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-bin\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053494 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9d8p\" (UniqueName: \"kubernetes.io/projected/0390852d-4e2a-4c00-9b0f-cbf1945008a2-kube-api-access-q9d8p\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053505 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtpr\" (UniqueName: \"kubernetes.io/projected/41b0089d-73d0-450a-84f5-8bfec82d97f9-kube-api-access-5gtpr\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053515 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-cabundle\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053526 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-ovn-kubernetes\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053535 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-env-overrides\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053545 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgssb\" (UniqueName: \"kubernetes.io/projected/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-kube-api-access-qgssb\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053556 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-pod-volumes-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053571 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-config-volume\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053581 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-slash\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053591 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-log-socket\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053603 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-ovnkube-config\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053618 132400 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-registration-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.053626 132400 reconciler.go:41] "Reconciler: start to sync state"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: route-controller-manager I0213 04:05:51.129975 132400 log.go:198] http: TLS handshake error from 127.0.0.1:55742: EOF
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155349 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-gbckr\" (UniqueName: \"kubernetes.io/projected/9744aca6-9463-42d2-a05e-f1e3af7b175e-kube-api-access-gbckr\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155497 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-openvswitch\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155534 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"etc-openvswitch-node\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-etc-openvswitch-node\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155565 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-node-log\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155593 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-netd\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155632 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-ovn\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155678 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-log-socket\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155719 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0390852d-4e2a-4c00-9b0f-cbf1945008a2-env-overrides\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155751 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-sjk85\" (UniqueName: \"kubernetes.io/projected/763e920a-b594-4485-bf77-dfed5dddbf03-kube-api-access-sjk85\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155782 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155816 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-netns\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155845 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-n5x8k\" (UniqueName: \"kubernetes.io/projected/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kube-api-access-n5x8k\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155875 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-etc-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155906 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-csi-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155937 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"lvmd-socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-socket-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.155972 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-key\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156224 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-ghk5h\" (UniqueName: \"kubernetes.io/projected/2e7bce65-b199-4d8a-bc2f-c63494419251-kube-api-access-ghk5h\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156311 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kubeconfig\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156345 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"lvmd-config-dir\" (UniqueName: \"kubernetes.io/configmap/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-config-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156366 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-openvswitch\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156397 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-certs\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156416 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-node-log\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156432 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-netd\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156447 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-ovn\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156467 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-log-socket\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156493 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c608b4f5-e1d8-4927-9659-5771e2bd21ac-hosts-file\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156522 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-systemd-units\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156556 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-q9d8p\" (UniqueName: \"kubernetes.io/projected/0390852d-4e2a-4c00-9b0f-cbf1945008a2-kube-api-access-q9d8p\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156588 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-node-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156632 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-metrics-tls\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156672 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/41b0089d-73d0-450a-84f5-8bfec82d97f9-default-certificate\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156705 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-ovn\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156734 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-bin\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156762 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-qgssb\" (UniqueName: \"kubernetes.io/projected/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-kube-api-access-qgssb\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156797 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-5gtpr\" (UniqueName: \"kubernetes.io/projected/41b0089d-73d0-450a-84f5-8bfec82d97f9-kube-api-access-5gtpr\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156829 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-cabundle\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156859 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-ovn-kubernetes\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156885 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-env-overrides\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156914 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-registration-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156944 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-pod-volumes-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156973 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-config-volume\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157002 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-slash\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") "
pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157034 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-log-socket\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157063 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-ovnkube-config\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157096 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-var-lib-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157127 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157248 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-node-log\") pod 
\"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157293 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-socket-dir\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157326 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-4gs8j\" (UniqueName: \"kubernetes.io/projected/c608b4f5-e1d8-4927-9659-5771e2bd21ac-kube-api-access-4gs8j\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet E0213 04:05:51.157357 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:05:51.657345146 -0500 EST m=+38.837691432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157454 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-netns\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157535 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-etc-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157564 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-csi-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157604 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lvmd-socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-socket-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157845 
132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-key\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.156225 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0390852d-4e2a-4c00-9b0f-cbf1945008a2-env-overrides\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.157969 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kubeconfig\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158041 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lvmd-config-dir\" (UniqueName: \"kubernetes.io/configmap/763e920a-b594-4485-bf77-dfed5dddbf03-lvmd-config-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158070 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-certs\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet 
I0213 04:05:51.156403 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch-node\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-etc-openvswitch-node\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158097 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c608b4f5-e1d8-4927-9659-5771e2bd21ac-hosts-file\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158115 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-systemd-units\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158223 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-node-plugin-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158415 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-metrics-tls\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 
04:05:51.158701 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/41b0089d-73d0-450a-84f5-8bfec82d97f9-default-certificate\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158757 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-run-ovn\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.158797 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-cni-bin\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159152 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2e7bce65-b199-4d8a-bc2f-c63494419251-signing-cabundle\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159213 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-run-ovn-kubernetes\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" 
Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159295 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-env-overrides\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159349 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-registration-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159388 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/763e920a-b594-4485-bf77-dfed5dddbf03-pod-volumes-dir\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159469 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-config-volume\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159508 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-host-slash\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 
04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159544 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-log-socket\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159638 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-ovnkube-config\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159687 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-var-lib-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159725 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-run-openvswitch\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159757 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0390852d-4e2a-4c00-9b0f-cbf1945008a2-node-log\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.159823 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9744aca6-9463-42d2-a05e-f1e3af7b175e-socket-dir\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.214639 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgssb\" (UniqueName: \"kubernetes.io/projected/d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7-kube-api-access-qgssb\") pod \"dns-default-z4v2p\" (UID: \"d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7\") " pod="openshift-dns/dns-default-z4v2p" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.215025 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjk85\" (UniqueName: \"kubernetes.io/projected/763e920a-b594-4485-bf77-dfed5dddbf03-kube-api-access-sjk85\") pod \"topolvm-node-9bnp5\" (UID: \"763e920a-b594-4485-bf77-dfed5dddbf03\") " pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.215303 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbckr\" (UniqueName: \"kubernetes.io/projected/9744aca6-9463-42d2-a05e-f1e3af7b175e-kube-api-access-gbckr\") pod \"topolvm-controller-78cbfc4867-qdfs4\" (UID: \"9744aca6-9463-42d2-a05e-f1e3af7b175e\") " pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.215689 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gtpr\" (UniqueName: 
\"kubernetes.io/projected/41b0089d-73d0-450a-84f5-8bfec82d97f9-kube-api-access-5gtpr\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.219057 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5x8k\" (UniqueName: \"kubernetes.io/projected/0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc-kube-api-access-n5x8k\") pod \"ovnkube-master-86mcc\" (UID: \"0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc\") " pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.219286 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gs8j\" (UniqueName: \"kubernetes.io/projected/c608b4f5-e1d8-4927-9659-5771e2bd21ac-kube-api-access-4gs8j\") pod \"node-resolver-sgsm4\" (UID: \"c608b4f5-e1d8-4927-9659-5771e2bd21ac\") " pod="openshift-dns/node-resolver-sgsm4" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.219394 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9d8p\" (UniqueName: \"kubernetes.io/projected/0390852d-4e2a-4c00-9b0f-cbf1945008a2-kube-api-access-q9d8p\") pod \"ovnkube-node-6gpbh\" (UID: \"0390852d-4e2a-4c00-9b0f-cbf1945008a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.219555 132400 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghk5h\" (UniqueName: \"kubernetes.io/projected/2e7bce65-b199-4d8a-bc2f-c63494419251-kube-api-access-ghk5h\") pod \"service-ca-7bd9547b57-vhmkf\" (UID: \"2e7bce65-b199-4d8a-bc2f-c63494419251\") " pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.303257 
132400 scope.go:115] "RemoveContainer" containerID="9b6d8a32ca92f23632ee5e9b6ba757dda187d1eb44ad57b68f47afebaf9d8728" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.310263 132400 scope.go:115] "RemoveContainer" containerID="1575137a44aa2b36604a81c6790648f4e1d8da769f62cc3a71df04b8196af660" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.310279 132400 scope.go:115] "RemoveContainer" containerID="9ee30baebfe0e235ffeed4baf7789c79901e924878d900552badae4730ef7bcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.310283 132400 scope.go:115] "RemoveContainer" containerID="256bc3b44bcaf0961e036a129772ecb541572ae9099fea3714a9ee5ee89327de" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.310287 132400 scope.go:115] "RemoveContainer" containerID="302aa47ff46bb2d6b45572d0d1cf955fa76bcf2ceb0ffe9556913a63a67cceba" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.315349 132400 scope.go:115] "RemoveContainer" containerID="7e387da48f96a4b09e4f1b967bf7bfec98e906343b0b6994358216503c15441f" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.324703 132400 scope.go:115] "RemoveContainer" containerID="d3ba1fdb0d00bff15c239ea3ece72cc195b33b14da8b025f38a2a7e130e5f05e" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.324721 132400 scope.go:115] "RemoveContainer" containerID="6c9d73c5e0390892a9c786fe39100f9747806ab28f666c35778e72923fb5c757" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.324725 132400 scope.go:115] "RemoveContainer" containerID="3165b5e479fa48097d83984fec120a1f040ab983b8b6306196d243e6719d6a9d" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.324730 132400 scope.go:115] "RemoveContainer" containerID="20535f54100b0f7f6e34e5d0ec94424dbb95ea06e69ce637950770f3db6f820e" Feb 13 04:05:51 
localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.324907 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.332323 132400 scope.go:115] "RemoveContainer" containerID="a257774a21f8c38279f035d34f1e830a5944dccd0853d98f98cbbe2b7862176e" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.341972 132400 scope.go:115] "RemoveContainer" containerID="4d9a087ce5c7df4274efa026c3ed47959db4974177b5dc845636bd745df49f3f" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.341987 132400 scope.go:115] "RemoveContainer" containerID="28a23b4dfe4e59e0c01a5e229a51deae1a8c2d130eb30f1367867e0c024b828b" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.341992 132400 scope.go:115] "RemoveContainer" containerID="97352d67f478b1d1c8cd5883527175619c9af710caaace61e1ed9a8c7db3eb97" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.341996 132400 scope.go:115] "RemoveContainer" containerID="f255f4eb69604c1e6bf50f06cf96a70fe880978735334642d0ed181376f9111f" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.343857 132400 scope.go:115] "RemoveContainer" containerID="a3204dbcdf71733932b6a8aa25e9862e76bc7e8fb6ff481d6ec7066d415ffa1f" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.343871 132400 scope.go:115] "RemoveContainer" containerID="7349fc23698f553ff5489f20d2454801ebbe0dc758fc306ecf0a8498469dabdb" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.345950 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.666614 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet E0213 04:05:51.666932 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:05:52.666919456 -0500 EST m=+39.847265725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.796502 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/2.log" Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.801627 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerStarted Data:db9edeb383fd96562cef6b3283b6c05ab6078715ff61323866ef2a2a2998adf5} Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.803465 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sgsm4" event=&{ID:c608b4f5-e1d8-4927-9659-5771e2bd21ac Type:ContainerStarted Data:1cd2ca1449e5a669c8ef6af3c36102bd6527afc1dd60556b7e61c90589523315} Feb 13 04:05:51 localhost.localdomain microshift[132400]: kubelet I0213 04:05:51.922170 
132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:eb94e774151d72676fa45a79264e41379083829bd0b547474e9a96138dc14075}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.677651 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet E0213 04:05:52.677774 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:05:54.677760731 -0500 EST m=+41.858106999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.924269 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.925783 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:cdd44d18fe75ab4ad06463eea45b283eeec4cd372acd2aae7aa1dd0fb6b95762}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.925858 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:3eb2dcf5a4fb3bed48b9dc8e16d843a09fa7df662a2133343af2f3463a613c9d}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.925887 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.927540 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:4ca318be69a33e822d971a22851c7d889959a3b61319d77fe6aad3962f7eee44}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.927554 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.927688 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.928733 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/2.log"
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.929082 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerStarted Data:f177be15e0288672ea2dbeb13fb95d9084f8df64dc932da9015a3f8ed276a58d}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.929094 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerStarted Data:f8e2769490426fa34754d283a7fd0de536e0eb47dd23862daf8f6dd004d99078}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.929565 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/2.log"
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.929638 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6gpbh" event=&{ID:0390852d-4e2a-4c00-9b0f-cbf1945008a2 Type:ContainerStarted Data:c34e60fff312b4c0635e761e5635804aee540f86b29de738e28b57e449a87950}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.931149 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:a3fc844f08a83248a30b2c8a8beb9703f1a6a2d1eb9bf6fb94a946be9c510bda}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.931167 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:f09b8082c7856eab6dbfe430b919355faadcd2b89b57ee546dd043b8cb91dd61}
Feb 13 04:05:52 localhost.localdomain microshift[132400]: kubelet I0213 04:05:52.931173 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b}
Feb 13 04:05:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:53.286525 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:05:53 localhost.localdomain microshift[132400]: kubelet I0213 04:05:53.934337 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/2.log"
Feb 13 04:05:53 localhost.localdomain microshift[132400]: kubelet I0213 04:05:53.934883 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc" event=&{ID:0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc Type:ContainerStarted Data:f41a47867216419d533581465db81b38683e4bf9c10e1619892f47c3f82619b5}
Feb 13 04:05:53 localhost.localdomain microshift[132400]: kubelet I0213 04:05:53.935945 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:53 localhost.localdomain microshift[132400]: kubelet I0213 04:05:53.938621 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:d6dfaf9d8719a2143467af320617c346aef0074a719f9c9221511c832e47f262}
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kubelet I0213 04:05:54.690967 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kubelet E0213 04:05:54.691074 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:05:58.691065645 -0500 EST m=+45.871411912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:54.728457 132400 node_lifecycle_controller.go:909] Node localhost.localdomain is NotReady as of 2023-02-13 04:05:54.728448536 -0500 EST m=+41.908794805. Adding it to the Taint queue.
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kubelet I0213 04:05:54.939797 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kubelet I0213 04:05:54.939797 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kubelet I0213 04:05:54.940162 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:05:54.940586 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:05:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:05:54.940602 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kubelet I0213 04:05:55.025872 132400 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeReady"
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:55.042693 132400 topologycache.go:212] Ignoring node localhost.localdomain because it has an excluded label
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:55.042831 132400 topologycache.go:248] Insufficient node info for topology hints (0 zones, 0 CPU, true)
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kubelet I0213 04:05:55.653870 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kubelet I0213 04:05:55.705528 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kubelet I0213 04:05:55.791003 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:55 localhost.localdomain microshift[132400]: kubelet I0213 04:05:55.834371 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:05:57 localhost.localdomain microshift[132400]: kubelet I0213 04:05:57.478745 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:05:57 localhost.localdomain microshift[132400]: kubelet I0213 04:05:57.478822 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:05:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:05:58.286202 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:05:58 localhost.localdomain microshift[132400]: kubelet I0213 04:05:58.478487 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:05:58 localhost.localdomain microshift[132400]: kubelet I0213 04:05:58.478796 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:05:58 localhost.localdomain microshift[132400]: kubelet I0213 04:05:58.716928 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:05:58 localhost.localdomain microshift[132400]: kubelet E0213 04:05:58.717104 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:06:06.717083683 -0500 EST m=+53.897429999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:05:59 localhost.localdomain microshift[132400]: kubelet I0213 04:05:59.479733 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:05:59 localhost.localdomain microshift[132400]: kubelet I0213 04:05:59.479965 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:05:59 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:59.728823 132400 node_lifecycle_controller.go:933] Node localhost.localdomain is healthy again, removing all taints
Feb 13 04:05:59 localhost.localdomain microshift[132400]: kube-controller-manager I0213 04:05:59.729029 132400 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
Feb 13 04:06:01 localhost.localdomain microshift[132400]: kubelet I0213 04:06:01.377468 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-master-86mcc"
Feb 13 04:06:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:03.287121 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:03 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:06:03.576776 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:06:03 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:06:03.576806 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:06:06 localhost.localdomain microshift[132400]: kubelet I0213 04:06:06.347336 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:06 localhost.localdomain microshift[132400]: kubelet I0213 04:06:06.347382 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:06 localhost.localdomain microshift[132400]: kubelet I0213 04:06:06.768716 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:06:06 localhost.localdomain microshift[132400]: kubelet E0213 04:06:06.768853 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:06:22.768842531 -0500 EST m=+69.949188799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:06:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:08.287007 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:08 localhost.localdomain microshift[132400]: kubelet I0213 04:06:08.479445 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:08 localhost.localdomain microshift[132400]: kubelet I0213 04:06:08.479628 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:09 localhost.localdomain microshift[132400]: kubelet I0213 04:06:09.347653 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:09 localhost.localdomain microshift[132400]: kubelet I0213 04:06:09.348014 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.964072 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb" exitCode=1
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.964465 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb}
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.964519 132400 scope.go:115] "RemoveContainer" containerID="28a23b4dfe4e59e0c01a5e229a51deae1a8c2d130eb30f1367867e0c024b828b"
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.964827 132400 scope.go:115] "RemoveContainer" containerID="40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb"
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet E0213 04:06:10.965125 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.969632 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b" exitCode=1
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.969731 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b}
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet I0213 04:06:10.969972 132400 scope.go:115] "RemoveContainer" containerID="8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b"
Feb 13 04:06:10 localhost.localdomain microshift[132400]: kubelet E0213 04:06:10.970257 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:06:11 localhost.localdomain microshift[132400]: kubelet I0213 04:06:11.043000 132400 scope.go:115] "RemoveContainer" containerID="1575137a44aa2b36604a81c6790648f4e1d8da769f62cc3a71df04b8196af660"
Feb 13 04:06:12 localhost.localdomain microshift[132400]: kubelet I0213 04:06:12.348162 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:12 localhost.localdomain microshift[132400]: kubelet I0213 04:06:12.348499 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:13.287140 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:15 localhost.localdomain microshift[132400]: kubelet I0213 04:06:15.349215 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:15 localhost.localdomain microshift[132400]: kubelet I0213 04:06:15.349600 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:18.287070 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:18 localhost.localdomain microshift[132400]: kubelet I0213 04:06:18.350296 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:18 localhost.localdomain microshift[132400]: kubelet I0213 04:06:18.350344 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:20 localhost.localdomain microshift[132400]: kubelet I0213 04:06:20.901269 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:06:20 localhost.localdomain microshift[132400]: kubelet I0213 04:06:20.902043 132400 scope.go:115] "RemoveContainer" containerID="40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb"
Feb 13 04:06:21 localhost.localdomain microshift[132400]: kubelet I0213 04:06:21.351384 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:21 localhost.localdomain microshift[132400]: kubelet I0213 04:06:21.351425 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:21 localhost.localdomain microshift[132400]: kubelet I0213 04:06:21.993083 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22}
Feb 13 04:06:22 localhost.localdomain microshift[132400]: kubelet I0213 04:06:22.773964 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:06:22 localhost.localdomain microshift[132400]: kubelet E0213 04:06:22.774121 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:06:54.774106195 -0500 EST m=+101.954452479 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:06:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:23.286964 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.352575 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.352622 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.664591 132400 scope.go:115] "RemoveContainer" containerID="8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b"
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.999700 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22" exitCode=1
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.999764 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22}
Feb 13 04:06:24 localhost.localdomain microshift[132400]: kubelet I0213 04:06:24.999912 132400 scope.go:115] "RemoveContainer" containerID="40b69396397e2328dd9bc2fcd790e8ccb4ef038e976e1f365ad186a81f596efb"
Feb 13 04:06:25 localhost.localdomain microshift[132400]: kubelet I0213 04:06:25.000147 132400 scope.go:115] "RemoveContainer" containerID="ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22"
Feb 13 04:06:25 localhost.localdomain microshift[132400]: kubelet E0213 04:06:25.000397 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:06:25 localhost.localdomain microshift[132400]: kubelet I0213 04:06:25.006477 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294}
Feb 13 04:06:25 localhost.localdomain microshift[132400]: kubelet I0213 04:06:25.006902 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:06:26 localhost.localdomain microshift[132400]: kubelet I0213 04:06:26.007322 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:26 localhost.localdomain microshift[132400]: kubelet I0213 04:06:26.007370 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:27 localhost.localdomain microshift[132400]: kubelet I0213 04:06:27.011037 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:27 localhost.localdomain microshift[132400]: kubelet I0213 04:06:27.011078 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:27 localhost.localdomain microshift[132400]: kubelet I0213 04:06:27.353239 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:27 localhost.localdomain microshift[132400]: kubelet I0213 04:06:27.353279 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.014521 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294" exitCode=1
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.014547 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294}
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.014567 132400 scope.go:115] "RemoveContainer" containerID="8eef2169e37b79da4482316ee843793bed52fde3be76eba091d7b93d63e58b8b"
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.014815 132400 scope.go:115] "RemoveContainer" containerID="5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294"
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet E0213 04:06:28.015080 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:06:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:28.286809 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.479331 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:28 localhost.localdomain microshift[132400]: kubelet I0213 04:06:28.480023 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:30 localhost.localdomain microshift[132400]: kubelet I0213 04:06:30.353709 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:30 localhost.localdomain microshift[132400]: kubelet I0213 04:06:30.353775 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:33.286767 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:33 localhost.localdomain microshift[132400]: kubelet I0213 04:06:33.354221 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:33 localhost.localdomain microshift[132400]: kubelet I0213 04:06:33.354397 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:06:36.091758 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:06:36 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:06:36.091797 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:06:36 localhost.localdomain microshift[132400]: kubelet I0213 04:06:36.356728 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:06:36 localhost.localdomain microshift[132400]: kubelet I0213 04:06:36.356787 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:06:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:38.286854 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:06:38 localhost.localdomain microshift[132400]: kubelet I0213 04:06:38.665074 132400 scope.go:115] "RemoveContainer" containerID="ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22"
Feb 13 04:06:38 localhost.localdomain microshift[132400]: kubelet E0213 04:06:38.665504 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"topolvm-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:06:39 localhost.localdomain microshift[132400]: kubelet I0213 04:06:39.357645 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:39 localhost.localdomain microshift[132400]: kubelet I0213 04:06:39.357729 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:41 localhost.localdomain microshift[132400]: kubelet I0213 04:06:41.664360 132400 scope.go:115] "RemoveContainer" containerID="5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294" Feb 13 04:06:41 localhost.localdomain microshift[132400]: kubelet E0213 04:06:41.664829 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.038431 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f" exitCode=255 Feb 13 
04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.038603 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f} Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.038632 132400 scope.go:115] "RemoveContainer" containerID="7e387da48f96a4b09e4f1b967bf7bfec98e906343b0b6994358216503c15441f" Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.039035 132400 scope.go:115] "RemoveContainer" containerID="4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f" Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet E0213 04:06:42.039174 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.358188 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:42 localhost.localdomain microshift[132400]: kubelet I0213 04:06:42.358246 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:42 
localhost.localdomain microshift[132400]: kube-apiserver W0213 04:06:42.640932 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:06:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:06:42.641098 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:06:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:43.286748 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:06:45 localhost.localdomain microshift[132400]: kubelet I0213 04:06:45.359331 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:45 localhost.localdomain microshift[132400]: kubelet I0213 04:06:45.359849 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:48.287276 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:06:48 localhost.localdomain microshift[132400]: kubelet I0213 04:06:48.360058 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:48 localhost.localdomain microshift[132400]: kubelet I0213 04:06:48.360357 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:51 localhost.localdomain microshift[132400]: kubelet I0213 04:06:51.361290 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:51 localhost.localdomain microshift[132400]: kubelet I0213 04:06:51.361383 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:52 localhost.localdomain microshift[132400]: kubelet I0213 04:06:52.663836 132400 scope.go:115] "RemoveContainer" containerID="ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22" Feb 13 04:06:53 localhost.localdomain microshift[132400]: kubelet I0213 04:06:53.058353 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0} Feb 13 04:06:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:53.286306 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:06:54 localhost.localdomain 
microshift[132400]: kubelet I0213 04:06:54.362025 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:54 localhost.localdomain microshift[132400]: kubelet I0213 04:06:54.362602 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:54 localhost.localdomain microshift[132400]: kubelet I0213 04:06:54.664207 132400 scope.go:115] "RemoveContainer" containerID="4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f" Feb 13 04:06:54 localhost.localdomain microshift[132400]: kubelet I0213 04:06:54.778819 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:06:54 localhost.localdomain microshift[132400]: kubelet E0213 04:06:54.779305 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:07:58.779293997 -0500 EST m=+165.959640267 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:06:55 localhost.localdomain microshift[132400]: kubelet I0213 04:06:55.061883 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb} Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet I0213 04:06:56.064845 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0" exitCode=1 Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet I0213 04:06:56.065124 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0} Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet I0213 04:06:56.065150 132400 scope.go:115] "RemoveContainer" containerID="ee55dfbb9ea616c72ded53f696bce24b5a06770506900f4cd969d73091ac5d22" Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet I0213 04:06:56.065419 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0" Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet E0213 04:06:56.065738 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-node 
pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:06:56 localhost.localdomain microshift[132400]: kubelet I0213 04:06:56.663844 132400 scope.go:115] "RemoveContainer" containerID="5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294" Feb 13 04:06:57 localhost.localdomain microshift[132400]: kubelet I0213 04:06:57.068540 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920} Feb 13 04:06:57 localhost.localdomain microshift[132400]: kubelet I0213 04:06:57.069632 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:06:57 localhost.localdomain microshift[132400]: kubelet I0213 04:06:57.363560 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:57 localhost.localdomain microshift[132400]: kubelet I0213 04:06:57.363609 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:58 localhost.localdomain microshift[132400]: kubelet I0213 04:06:58.070464 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 
10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:58 localhost.localdomain microshift[132400]: kubelet I0213 04:06:58.070501 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:06:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:06:58.286892 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:06:59 localhost.localdomain microshift[132400]: kubelet I0213 04:06:59.070611 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:06:59 localhost.localdomain microshift[132400]: kubelet I0213 04:06:59.070674 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.071087 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.071148 
132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.079133 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920" exitCode=1 Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.079183 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920} Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.079221 132400 scope.go:115] "RemoveContainer" containerID="5e44942d974bda5f4336fc763c07b5b55fe0b97f69337c87ebc48f0c4b0a6294" Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.079698 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920" Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet E0213 04:07:00.080024 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.364063 132400 patch_prober.go:28] interesting 
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:00 localhost.localdomain microshift[132400]: kubelet I0213 04:07:00.364132 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:03.286274 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:07:03 localhost.localdomain microshift[132400]: kubelet I0213 04:07:03.364378 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:03 localhost.localdomain microshift[132400]: kubelet I0213 04:07:03.364556 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:04 localhost.localdomain microshift[132400]: kubelet I0213 04:07:04.632653 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:04 localhost.localdomain microshift[132400]: kubelet I0213 04:07:04.633045 132400 prober.go:109] "Probe 
failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:06 localhost.localdomain microshift[132400]: kubelet I0213 04:07:06.365339 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:06 localhost.localdomain microshift[132400]: kubelet I0213 04:07:06.365599 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:08.287227 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:07:08 localhost.localdomain microshift[132400]: kubelet I0213 04:07:08.663496 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0" Feb 13 04:07:08 localhost.localdomain microshift[132400]: kubelet E0213 04:07:08.663950 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:07:09 localhost.localdomain microshift[132400]: kubelet I0213 04:07:09.366704 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns 
namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:09 localhost.localdomain microshift[132400]: kubelet I0213 04:07:09.367024 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:10 localhost.localdomain microshift[132400]: kubelet I0213 04:07:10.664135 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920" Feb 13 04:07:10 localhost.localdomain microshift[132400]: kubelet E0213 04:07:10.664516 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:07:12 localhost.localdomain microshift[132400]: kubelet I0213 04:07:12.367800 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:12 localhost.localdomain microshift[132400]: kubelet I0213 04:07:12.368218 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 13 04:07:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:13.286769 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:07:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:07:13.551965 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:07:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:07:13.552000 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:07:14 localhost.localdomain microshift[132400]: kubelet I0213 04:07:14.632235 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:14 localhost.localdomain microshift[132400]: kubelet I0213 04:07:14.632618 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:07:15 localhost.localdomain microshift[132400]: kubelet I0213 04:07:15.369100 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:07:15 localhost.localdomain microshift[132400]: kubelet 
I0213 04:07:15.369159 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:18.286264 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:18 localhost.localdomain microshift[132400]: kubelet I0213 04:07:18.369693 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:18 localhost.localdomain microshift[132400]: kubelet I0213 04:07:18.369891 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:18 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:07:18.875395 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:07:18 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:07:18.875577 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:07:20 localhost.localdomain microshift[132400]: kubelet I0213 04:07:20.902025 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:07:20 localhost.localdomain microshift[132400]: kubelet I0213 04:07:20.902455 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0"
Feb 13 04:07:20 localhost.localdomain microshift[132400]: kubelet E0213 04:07:20.902773 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:07:21 localhost.localdomain microshift[132400]: kubelet I0213 04:07:21.370844 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:21 localhost.localdomain microshift[132400]: kubelet I0213 04:07:21.370894 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:23.286856 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:24 localhost.localdomain microshift[132400]: kubelet I0213 04:07:24.372022 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:24 localhost.localdomain microshift[132400]: kubelet I0213 04:07:24.372530 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:24 localhost.localdomain microshift[132400]: kubelet I0213 04:07:24.631455 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:24 localhost.localdomain microshift[132400]: kubelet I0213 04:07:24.631773 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:25 localhost.localdomain microshift[132400]: kubelet I0213 04:07:25.666969 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920"
Feb 13 04:07:25 localhost.localdomain microshift[132400]: kubelet E0213 04:07:25.667248 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:07:26 localhost.localdomain microshift[132400]: kubelet I0213 04:07:26.192953 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:07:26 localhost.localdomain microshift[132400]: kubelet I0213 04:07:26.193462 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920"
Feb 13 04:07:26 localhost.localdomain microshift[132400]: kubelet E0213 04:07:26.194173 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:07:27 localhost.localdomain microshift[132400]: kubelet I0213 04:07:27.372858 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:27 localhost.localdomain microshift[132400]: kubelet I0213 04:07:27.373288 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:28.287323 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:29 localhost.localdomain microshift[132400]: kubelet I0213 04:07:29.122026 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb" exitCode=255
Feb 13 04:07:29 localhost.localdomain microshift[132400]: kubelet I0213 04:07:29.122073 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb}
Feb 13 04:07:29 localhost.localdomain microshift[132400]: kubelet I0213 04:07:29.122330 132400 scope.go:115] "RemoveContainer" containerID="4ee3570583272694abe64187c41f0ceee4cec87ed291672ed6430f399ffe082f"
Feb 13 04:07:29 localhost.localdomain microshift[132400]: kubelet I0213 04:07:29.122561 132400 scope.go:115] "RemoveContainer" containerID="8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb"
Feb 13 04:07:29 localhost.localdomain microshift[132400]: kubelet E0213 04:07:29.122732 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:07:30 localhost.localdomain microshift[132400]: kubelet I0213 04:07:30.373848 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:30 localhost.localdomain microshift[132400]: kubelet I0213 04:07:30.373899 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:31 localhost.localdomain microshift[132400]: kubelet I0213 04:07:31.663721 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0"
Feb 13 04:07:31 localhost.localdomain microshift[132400]: kubelet E0213 04:07:31.664722 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:07:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:33.286565 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:33 localhost.localdomain microshift[132400]: kubelet I0213 04:07:33.374602 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:33 localhost.localdomain microshift[132400]: kubelet I0213 04:07:33.374793 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:34 localhost.localdomain microshift[132400]: kubelet I0213 04:07:34.632188 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:34 localhost.localdomain microshift[132400]: kubelet I0213 04:07:34.632478 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:36 localhost.localdomain microshift[132400]: kubelet I0213 04:07:36.375129 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:36 localhost.localdomain microshift[132400]: kubelet I0213 04:07:36.375176 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:36 localhost.localdomain microshift[132400]: kubelet I0213 04:07:36.665101 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920"
Feb 13 04:07:36 localhost.localdomain microshift[132400]: kubelet E0213 04:07:36.665476 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:07:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:38.287155 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:39 localhost.localdomain microshift[132400]: kubelet I0213 04:07:39.375307 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:39 localhost.localdomain microshift[132400]: kubelet I0213 04:07:39.375395 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:41 localhost.localdomain microshift[132400]: kubelet I0213 04:07:41.664018 132400 scope.go:115] "RemoveContainer" containerID="8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb"
Feb 13 04:07:41 localhost.localdomain microshift[132400]: kubelet E0213 04:07:41.664198 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:07:42 localhost.localdomain microshift[132400]: kubelet I0213 04:07:42.375984 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:42 localhost.localdomain microshift[132400]: kubelet I0213 04:07:42.376033 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:43.287496 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:44 localhost.localdomain microshift[132400]: kubelet I0213 04:07:44.632142 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:44 localhost.localdomain microshift[132400]: kubelet I0213 04:07:44.632734 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:44 localhost.localdomain microshift[132400]: kubelet I0213 04:07:44.633064 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:07:44 localhost.localdomain microshift[132400]: kubelet I0213 04:07:44.633928 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:07:44 localhost.localdomain microshift[132400]: kubelet I0213 04:07:44.634245 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655" gracePeriod=30
Feb 13 04:07:45 localhost.localdomain microshift[132400]: kubelet I0213 04:07:45.376587 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:45 localhost.localdomain microshift[132400]: kubelet I0213 04:07:45.376979 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:45 localhost.localdomain microshift[132400]: kubelet I0213 04:07:45.664068 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0"
Feb 13 04:07:46 localhost.localdomain microshift[132400]: kubelet I0213 04:07:46.146982 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f}
Feb 13 04:07:47 localhost.localdomain microshift[132400]: kubelet I0213 04:07:47.664060 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920"
Feb 13 04:07:48 localhost.localdomain microshift[132400]: kubelet I0213 04:07:48.152028 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9}
Feb 13 04:07:48 localhost.localdomain microshift[132400]: kubelet I0213 04:07:48.152890 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:07:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:48.286617 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:48 localhost.localdomain microshift[132400]: kubelet I0213 04:07:48.377688 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:48 localhost.localdomain microshift[132400]: kubelet I0213 04:07:48.377758 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.152814 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.153194 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.154895 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f" exitCode=1
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.155590 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f}
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.155689 132400 scope.go:115] "RemoveContainer" containerID="18f5846413e7e974110998e1ddb9e2fb2fb628155b906278c930c2e146a282d0"
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet I0213 04:07:49.155932 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f"
Feb 13 04:07:49 localhost.localdomain microshift[132400]: kubelet E0213 04:07:49.156224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:07:50 localhost.localdomain microshift[132400]: kubelet I0213 04:07:50.155648 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:50 localhost.localdomain microshift[132400]: kubelet I0213 04:07:50.155698 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.160795 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9" exitCode=1
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.160823 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9}
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.160846 132400 scope.go:115] "RemoveContainer" containerID="de6e8ea5e7ed15b95ac8a42c2f7f1de4f82169c5b7cbbaa7235d1f6ab0718920"
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.161156 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9"
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet E0213 04:07:51.163799 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.378869 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:51 localhost.localdomain microshift[132400]: kubelet I0213 04:07:51.378933 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:53.286374 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:53 localhost.localdomain microshift[132400]: kubelet I0213 04:07:53.664412 132400 scope.go:115] "RemoveContainer" containerID="8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb"
Feb 13 04:07:53 localhost.localdomain microshift[132400]: kubelet E0213 04:07:53.999476 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:07:53 localhost.localdomain microshift[132400]: kubelet E0213 04:07:53.999504 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:07:54 localhost.localdomain microshift[132400]: kubelet I0213 04:07:54.168491 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1}
Feb 13 04:07:54 localhost.localdomain microshift[132400]: kubelet I0213 04:07:54.379063 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:54 localhost.localdomain microshift[132400]: kubelet I0213 04:07:54.379119 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:56 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:07:56.716053 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:07:56 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:07:56.716105 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:07:57 localhost.localdomain microshift[132400]: kubelet I0213 04:07:57.379613 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:07:57 localhost.localdomain microshift[132400]: kubelet I0213 04:07:57.379706 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:07:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:07:58.286380 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:07:58 localhost.localdomain microshift[132400]: kubelet I0213 04:07:58.801905 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:07:58 localhost.localdomain microshift[132400]: kubelet E0213 04:07:58.802071 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:10:00.802058551 -0500 EST m=+287.982404829 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:08:00 localhost.localdomain microshift[132400]: kubelet I0213 04:08:00.380469 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:08:00 localhost.localdomain microshift[132400]: kubelet I0213 04:08:00.380896 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:08:01 localhost.localdomain microshift[132400]: kubelet I0213 04:08:01.664389 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f"
Feb 13 04:08:01 localhost.localdomain microshift[132400]: kubelet E0213 04:08:01.665221 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:08:02 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:08:02.354582 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:08:02 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:08:02.354605 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:08:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:03.286845 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:08:03 localhost.localdomain microshift[132400]: kubelet I0213 04:08:03.381718 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:08:03 localhost.localdomain microshift[132400]: kubelet I0213 04:08:03.381992 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:08:04 localhost.localdomain microshift[132400]: kubelet I0213 04:08:04.665249 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9"
Feb 13 04:08:04 localhost.localdomain microshift[132400]: kubelet E0213 04:08:04.665802 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:08:05 localhost.localdomain microshift[132400]: kubelet I0213 04:08:05.185154 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655" exitCode=0
Feb 13 04:08:05 localhost.localdomain microshift[132400]: kubelet I0213 04:08:05.185183 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655}
Feb 13 04:08:05 localhost.localdomain microshift[132400]: kubelet I0213 04:08:05.185198 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0}
Feb 13 04:08:05 localhost.localdomain microshift[132400]: kubelet I0213 04:08:05.185211 132400 scope.go:115] "RemoveContainer" containerID="a3204dbcdf71733932b6a8aa25e9862e76bc7e8fb6ff481d6ec7066d415ffa1f"
Feb 13 04:08:06 localhost.localdomain microshift[132400]: kubelet I0213 04:08:06.187197 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:08:06 localhost.localdomain microshift[132400]: kubelet I0213 04:08:06.382903 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:08:06 localhost.localdomain microshift[132400]: kubelet I0213 04:08:06.382991 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:08:06 localhost.localdomain microshift[132400]: kubelet I0213 04:08:06.383059 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:08:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:08.286246 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:08:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:13.286336 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:08:13 localhost.localdomain microshift[132400]: kubelet I0213 04:08:13.664391 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f"
Feb 13 04:08:13 localhost.localdomain microshift[132400]: kubelet E0213 04:08:13.664801 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:08:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:18.286278 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:08:18 localhost.localdomain microshift[132400]: kubelet I0213 04:08:18.346888 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:08:18 localhost.localdomain microshift[132400]: kubelet I0213 04:08:18.346936 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:08:19 localhost.localdomain microshift[132400]: kubelet I0213 04:08:19.664244 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9"
Feb 13 04:08:19 localhost.localdomain microshift[132400]: kubelet E0213 04:08:19.664608 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:08:20 localhost.localdomain microshift[132400]: kubelet I0213 04:08:20.902241 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:08:20 localhost.localdomain microshift[132400]: kubelet I0213 04:08:20.902948 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f"
Feb 13 04:08:20 localhost.localdomain microshift[132400]: kubelet E0213 04:08:20.903274 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:08:21 localhost.localdomain microshift[132400]: kubelet I0213 04:08:21.347983 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:21 localhost.localdomain microshift[132400]: kubelet I0213 04:08:21.348035 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:23.287160 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:24 localhost.localdomain microshift[132400]: kubelet I0213 04:08:24.348578 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:24 localhost.localdomain microshift[132400]: kubelet I0213 04:08:24.349172 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:26 localhost.localdomain microshift[132400]: kubelet I0213 04:08:26.192089 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:08:26 localhost.localdomain microshift[132400]: kubelet I0213 04:08:26.192375 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9" Feb 13 04:08:26 localhost.localdomain microshift[132400]: kubelet E0213 04:08:26.192693 132400 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:08:27 localhost.localdomain microshift[132400]: kubelet I0213 04:08:27.350073 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:27 localhost.localdomain microshift[132400]: kubelet I0213 04:08:27.350127 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:28 localhost.localdomain microshift[132400]: kubelet I0213 04:08:28.220306 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1" exitCode=255 Feb 13 04:08:28 localhost.localdomain microshift[132400]: kubelet I0213 04:08:28.220481 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1} Feb 13 04:08:28 localhost.localdomain microshift[132400]: kubelet I0213 04:08:28.220525 132400 scope.go:115] "RemoveContainer" containerID="8efe4261615b176a45d2aed8a0883cf5179e078c700b49d607c87638ed0858bb" Feb 13 04:08:28 localhost.localdomain microshift[132400]: kubelet 
I0213 04:08:28.220761 132400 scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1" Feb 13 04:08:28 localhost.localdomain microshift[132400]: kubelet E0213 04:08:28.220945 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:08:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:28.287037 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:30 localhost.localdomain microshift[132400]: kubelet I0213 04:08:30.350445 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:30 localhost.localdomain microshift[132400]: kubelet I0213 04:08:30.350510 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:32 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:08:32.610781 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:08:32 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:08:32.610800 132400 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:08:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:33.286720 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:33 localhost.localdomain microshift[132400]: kubelet I0213 04:08:33.350890 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:33 localhost.localdomain microshift[132400]: kubelet I0213 04:08:33.351072 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:35 localhost.localdomain microshift[132400]: kubelet I0213 04:08:35.666095 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f" Feb 13 04:08:35 localhost.localdomain microshift[132400]: kubelet E0213 04:08:35.666841 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:08:36 localhost.localdomain microshift[132400]: kubelet I0213 04:08:36.351959 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:36 localhost.localdomain microshift[132400]: kubelet I0213 04:08:36.352146 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:38.286612 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:39 localhost.localdomain microshift[132400]: kubelet I0213 04:08:39.352850 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:39 localhost.localdomain microshift[132400]: kubelet I0213 04:08:39.352894 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:39 localhost.localdomain microshift[132400]: kubelet I0213 04:08:39.664185 132400 scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1" Feb 13 04:08:39 localhost.localdomain microshift[132400]: kubelet E0213 04:08:39.664361 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:08:40 localhost.localdomain microshift[132400]: kubelet I0213 04:08:40.664512 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9" Feb 13 04:08:40 localhost.localdomain microshift[132400]: kubelet E0213 04:08:40.665510 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:08:42 localhost.localdomain microshift[132400]: kubelet I0213 04:08:42.353904 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:42 localhost.localdomain microshift[132400]: kubelet I0213 04:08:42.353950 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:43.286363 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:45 localhost.localdomain microshift[132400]: kubelet I0213 04:08:45.354512 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns 
namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:45 localhost.localdomain microshift[132400]: kubelet I0213 04:08:45.354583 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:47 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:08:47.637451 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:08:47 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:08:47.637474 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:08:47 localhost.localdomain microshift[132400]: kubelet I0213 04:08:47.663797 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f" Feb 13 04:08:47 localhost.localdomain microshift[132400]: kubelet E0213 04:08:47.664242 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:08:48 localhost.localdomain 
microshift[132400]: sysconfwatch-controller I0213 04:08:48.286913 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:48 localhost.localdomain microshift[132400]: kubelet I0213 04:08:48.355348 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:48 localhost.localdomain microshift[132400]: kubelet I0213 04:08:48.355395 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:51 localhost.localdomain microshift[132400]: kubelet I0213 04:08:51.356175 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:51 localhost.localdomain microshift[132400]: kubelet I0213 04:08:51.356217 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:51 localhost.localdomain microshift[132400]: kubelet I0213 04:08:51.663870 132400 scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1" Feb 13 04:08:51 localhost.localdomain microshift[132400]: kubelet E0213 04:08:51.664061 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:08:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:53.287179 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:08:54 localhost.localdomain microshift[132400]: kubelet I0213 04:08:54.356711 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:54 localhost.localdomain microshift[132400]: kubelet I0213 04:08:54.357012 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:55 localhost.localdomain microshift[132400]: kubelet I0213 04:08:55.666190 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9" Feb 13 04:08:55 localhost.localdomain microshift[132400]: kubelet E0213 04:08:55.666500 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:08:57 localhost.localdomain microshift[132400]: kubelet I0213 
04:08:57.357770 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:08:57 localhost.localdomain microshift[132400]: kubelet I0213 04:08:57.358461 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:08:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:08:58.287161 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:09:00 localhost.localdomain microshift[132400]: kubelet I0213 04:09:00.358715 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:09:00 localhost.localdomain microshift[132400]: kubelet I0213 04:09:00.358770 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:09:00 localhost.localdomain microshift[132400]: kubelet I0213 04:09:00.663765 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f" Feb 13 04:09:00 localhost.localdomain microshift[132400]: kubelet E0213 04:09:00.664227 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:09:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:03.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:09:03 localhost.localdomain microshift[132400]: kubelet I0213 04:09:03.359637 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:09:03 localhost.localdomain microshift[132400]: kubelet I0213 04:09:03.359696 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:09:06 localhost.localdomain microshift[132400]: kubelet I0213 04:09:06.360305 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:09:06 localhost.localdomain microshift[132400]: kubelet I0213 04:09:06.360352 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:09:06 localhost.localdomain microshift[132400]: kubelet I0213 04:09:06.666153 132400 
scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1" Feb 13 04:09:06 localhost.localdomain microshift[132400]: kubelet E0213 04:09:06.666606 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:09:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:08.286510 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:09:08 localhost.localdomain microshift[132400]: kubelet I0213 04:09:08.664591 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9" Feb 13 04:09:08 localhost.localdomain microshift[132400]: kubelet E0213 04:09:08.665161 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:09:09 localhost.localdomain microshift[132400]: kubelet I0213 04:09:09.361423 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:09:09 localhost.localdomain microshift[132400]: kubelet I0213 04:09:09.361460 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:09:12 localhost.localdomain microshift[132400]: kubelet I0213 04:09:12.361532 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:09:12 localhost.localdomain microshift[132400]: kubelet I0213 04:09:12.361594 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:09:12 localhost.localdomain microshift[132400]: kubelet I0213 04:09:12.665362 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f" Feb 13 04:09:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:13.286608 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:09:13 localhost.localdomain microshift[132400]: kubelet I0213 04:09:13.291921 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b} Feb 13 04:09:14 localhost.localdomain microshift[132400]: kubelet I0213 04:09:14.631314 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:14 localhost.localdomain microshift[132400]: kubelet I0213 04:09:14.631352 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:15 localhost.localdomain microshift[132400]: kubelet I0213 04:09:15.362772 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:15 localhost.localdomain microshift[132400]: kubelet I0213 04:09:15.362960 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:16 localhost.localdomain microshift[132400]: kubelet I0213 04:09:16.297435 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b" exitCode=1
Feb 13 04:09:16 localhost.localdomain microshift[132400]: kubelet I0213 04:09:16.297465 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b}
Feb 13 04:09:16 localhost.localdomain microshift[132400]: kubelet I0213 04:09:16.297485 132400 scope.go:115] "RemoveContainer" containerID="0db7fcb465a3614aff58ac69e9f32702fbbc002c19f4180b9437256454db073f"
Feb 13 04:09:16 localhost.localdomain microshift[132400]: kubelet I0213 04:09:16.297805 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:09:16 localhost.localdomain microshift[132400]: kubelet E0213 04:09:16.298039 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:09:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:18.286820 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:18 localhost.localdomain microshift[132400]: kubelet I0213 04:09:18.363886 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:18 localhost.localdomain microshift[132400]: kubelet I0213 04:09:18.364079 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:19 localhost.localdomain microshift[132400]: kubelet I0213 04:09:19.663277 132400 scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1"
Feb 13 04:09:20 localhost.localdomain microshift[132400]: kubelet I0213 04:09:20.306094 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787}
Feb 13 04:09:20 localhost.localdomain microshift[132400]: kubelet I0213 04:09:20.664082 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9"
Feb 13 04:09:20 localhost.localdomain microshift[132400]: kubelet I0213 04:09:20.902119 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:09:20 localhost.localdomain microshift[132400]: kubelet I0213 04:09:20.902561 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:09:20 localhost.localdomain microshift[132400]: kubelet E0213 04:09:20.902861 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:09:21 localhost.localdomain microshift[132400]: kubelet I0213 04:09:21.308907 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e}
Feb 13 04:09:21 localhost.localdomain microshift[132400]: kubelet I0213 04:09:21.309704 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:09:21 localhost.localdomain microshift[132400]: kubelet I0213 04:09:21.364953 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:21 localhost.localdomain microshift[132400]: kubelet I0213 04:09:21.365299 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:22 localhost.localdomain microshift[132400]: kubelet I0213 04:09:22.309512 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:22 localhost.localdomain microshift[132400]: kubelet I0213 04:09:22.309987 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:23.287147 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:23 localhost.localdomain microshift[132400]: kubelet I0213 04:09:23.311655 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:23 localhost.localdomain microshift[132400]: kubelet I0213 04:09:23.311719 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.314757 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" exitCode=1
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.315023 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e}
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.315074 132400 scope.go:115] "RemoveContainer" containerID="2eb6b7d5dceaa3be2f6188507468f943be62a7a44f7d3694dcd687cc32f0abf9"
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.315322 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet E0213 04:09:24.315635 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.366270 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.366316 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.631811 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kubelet I0213 04:09:24.631857 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:09:24.998967 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:09:24 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:09:24.999119 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:09:26 localhost.localdomain microshift[132400]: kubelet I0213 04:09:26.192814 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:09:26 localhost.localdomain microshift[132400]: kubelet I0213 04:09:26.194055 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:09:26 localhost.localdomain microshift[132400]: kubelet E0213 04:09:26.194994 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:09:27 localhost.localdomain microshift[132400]: kubelet I0213 04:09:27.367145 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:27 localhost.localdomain microshift[132400]: kubelet I0213 04:09:27.367203 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:28.286544 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:30 localhost.localdomain microshift[132400]: kubelet I0213 04:09:30.367360 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:30 localhost.localdomain microshift[132400]: kubelet I0213 04:09:30.367795 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:31 localhost.localdomain microshift[132400]: kubelet I0213 04:09:31.663586 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:09:31 localhost.localdomain microshift[132400]: kubelet E0213 04:09:31.664359 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:09:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:33.287045 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:33 localhost.localdomain microshift[132400]: kubelet I0213 04:09:33.368357 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:33 localhost.localdomain microshift[132400]: kubelet I0213 04:09:33.368540 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:34 localhost.localdomain microshift[132400]: kubelet I0213 04:09:34.632148 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:34 localhost.localdomain microshift[132400]: kubelet I0213 04:09:34.632549 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:36 localhost.localdomain microshift[132400]: kubelet I0213 04:09:36.368810 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:36 localhost.localdomain microshift[132400]: kubelet I0213 04:09:36.368853 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:38.287100 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:39 localhost.localdomain microshift[132400]: kubelet I0213 04:09:39.369525 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:39 localhost.localdomain microshift[132400]: kubelet I0213 04:09:39.369603 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:39 localhost.localdomain microshift[132400]: kubelet I0213 04:09:39.663886 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:09:39 localhost.localdomain microshift[132400]: kubelet E0213 04:09:39.664190 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:09:41 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:09:41.726382 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:09:41 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:09:41.726740 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:09:42 localhost.localdomain microshift[132400]: kubelet I0213 04:09:42.369740 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:42 localhost.localdomain microshift[132400]: kubelet I0213 04:09:42.370031 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:43.286267 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:44 localhost.localdomain microshift[132400]: kubelet I0213 04:09:44.632064 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:44 localhost.localdomain microshift[132400]: kubelet I0213 04:09:44.632487 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:44 localhost.localdomain microshift[132400]: kubelet I0213 04:09:44.665102 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:09:44 localhost.localdomain microshift[132400]: kubelet E0213 04:09:44.665353 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:09:45 localhost.localdomain microshift[132400]: kubelet I0213 04:09:45.371214 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:45 localhost.localdomain microshift[132400]: kubelet I0213 04:09:45.371274 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:48.286508 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:48 localhost.localdomain microshift[132400]: kubelet I0213 04:09:48.371376 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:48 localhost.localdomain microshift[132400]: kubelet I0213 04:09:48.371596 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:51 localhost.localdomain microshift[132400]: kubelet I0213 04:09:51.372007 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:51 localhost.localdomain microshift[132400]: kubelet I0213 04:09:51.372093 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:52 localhost.localdomain microshift[132400]: kubelet I0213 04:09:52.664086 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:09:52 localhost.localdomain microshift[132400]: kubelet E0213 04:09:52.664723 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:09:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:53.286246 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.368279 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" exitCode=255
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.368315 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787}
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.368341 132400 scope.go:115] "RemoveContainer" containerID="2f0e6455b9700877c2b4ef5df53c74eb53022a01b37b4c261572d5768020edf1"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.368544 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet E0213 04:09:54.368762 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.373145 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.373276 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.631859 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.632049 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.632104 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.632453 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:09:54 localhost.localdomain microshift[132400]: kubelet I0213 04:09:54.632599 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0" gracePeriod=30
Feb 13 04:09:56 localhost.localdomain microshift[132400]: kubelet I0213 04:09:56.665200 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:09:56 localhost.localdomain microshift[132400]: kubelet E0213 04:09:56.665761 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:09:57 localhost.localdomain microshift[132400]: kubelet E0213 04:09:57.170323 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:09:57 localhost.localdomain microshift[132400]: kubelet E0213 04:09:57.170491 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:09:57 localhost.localdomain microshift[132400]: kubelet I0213 04:09:57.373846 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:09:57 localhost.localdomain microshift[132400]: kubelet I0213 04:09:57.373884 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:09:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:09:58.287039 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:10:00 localhost.localdomain microshift[132400]: kubelet I0213 04:10:00.374705 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:00 localhost.localdomain microshift[132400]: kubelet I0213 04:10:00.374750 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:00 localhost.localdomain microshift[132400]: kubelet I0213 04:10:00.888440 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:10:00 localhost.localdomain microshift[132400]: kubelet E0213 04:10:00.888556 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:12:02.888545401 -0500 EST m=+410.068891681 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:10:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:03.287310 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:10:03 localhost.localdomain microshift[132400]: kubelet I0213 04:10:03.375340 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:03 localhost.localdomain microshift[132400]: kubelet I0213 04:10:03.375399 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:06 localhost.localdomain microshift[132400]: kubelet I0213 04:10:06.375743 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:06 localhost.localdomain microshift[132400]: kubelet I0213 04:10:06.375785 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:07 localhost.localdomain microshift[132400]: kubelet I0213 04:10:07.664302 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:10:07 localhost.localdomain microshift[132400]: kubelet E0213 04:10:07.664605 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:10:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:08.286461 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:10:09 localhost.localdomain microshift[132400]: kubelet I0213 04:10:09.376393 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:09 localhost.localdomain microshift[132400]: kubelet I0213 04:10:09.376457 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:09 localhost.localdomain microshift[132400]: kubelet I0213 04:10:09.663306 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787"
Feb 13 04:10:09 localhost.localdomain microshift[132400]: kubelet E0213 04:10:09.663723 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:10:11 localhost.localdomain microshift[132400]: kubelet I0213 04:10:11.664469 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:10:11 localhost.localdomain microshift[132400]: kubelet E0213 04:10:11.664914 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:10:12 localhost.localdomain microshift[132400]: kubelet I0213 04:10:12.377490 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:12 localhost.localdomain microshift[132400]: kubelet I0213 04:10:12.377547 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:13.286827 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:10:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:10:14.397251 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:10:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:10:14.397535 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.377680 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.377731 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.398442 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0" exitCode=0
Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.398865 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0}
Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.398928 132400 kubelet.go:2251] "SyncLoop (PLEG): event
for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d} Feb 13 04:10:15 localhost.localdomain microshift[132400]: kubelet I0213 04:10:15.398967 132400 scope.go:115] "RemoveContainer" containerID="9ccf6bfc9d6e828eee30e1faf64416af0072c2dd74b6ed6d27acb82684066655" Feb 13 04:10:16 localhost.localdomain microshift[132400]: kubelet I0213 04:10:16.401337 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 04:10:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:18.286626 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:18 localhost.localdomain microshift[132400]: kubelet I0213 04:10:18.378181 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:18 localhost.localdomain microshift[132400]: kubelet I0213 04:10:18.378361 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:18 localhost.localdomain microshift[132400]: kubelet I0213 04:10:18.378426 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:10:18 localhost.localdomain microshift[132400]: kubelet I0213 04:10:18.664321 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" Feb 13 04:10:18 localhost.localdomain microshift[132400]: kubelet E0213 04:10:18.664605 132400 pod_workers.go:965] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:10:21 localhost.localdomain microshift[132400]: kubelet I0213 04:10:21.664214 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" Feb 13 04:10:21 localhost.localdomain microshift[132400]: kubelet E0213 04:10:21.664376 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:10:22 localhost.localdomain microshift[132400]: kubelet I0213 04:10:22.664847 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b" Feb 13 04:10:22 localhost.localdomain microshift[132400]: kubelet E0213 04:10:22.665087 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:10:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:23.287045 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:27 localhost.localdomain microshift[132400]: kubelet I0213 04:10:27.347789 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:27 localhost.localdomain microshift[132400]: kubelet I0213 04:10:27.348026 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:28.286519 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:30 localhost.localdomain microshift[132400]: kubelet I0213 04:10:30.348897 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:30 localhost.localdomain microshift[132400]: kubelet I0213 04:10:30.349304 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:30 localhost.localdomain microshift[132400]: kubelet I0213 04:10:30.665398 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" Feb 13 04:10:30 localhost.localdomain microshift[132400]: kubelet E0213 04:10:30.667437 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:10:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:33.286628 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:33 localhost.localdomain microshift[132400]: kubelet I0213 04:10:33.350072 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:33 localhost.localdomain microshift[132400]: kubelet I0213 04:10:33.350129 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:34 localhost.localdomain microshift[132400]: kubelet I0213 04:10:34.664195 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" Feb 13 04:10:34 localhost.localdomain microshift[132400]: kubelet E0213 04:10:34.664380 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:10:35 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:10:35.843056 132400 reflector.go:424] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:10:35 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:10:35.843076 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:10:36 localhost.localdomain microshift[132400]: kubelet I0213 04:10:36.350349 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:36 localhost.localdomain microshift[132400]: kubelet I0213 04:10:36.350390 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:37 localhost.localdomain microshift[132400]: kubelet I0213 04:10:37.663610 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b" Feb 13 04:10:37 localhost.localdomain microshift[132400]: kubelet E0213 04:10:37.663976 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 
Feb 13 04:10:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:38.286999 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:39 localhost.localdomain microshift[132400]: kubelet I0213 04:10:39.350720 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:39 localhost.localdomain microshift[132400]: kubelet I0213 04:10:39.351709 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:42 localhost.localdomain microshift[132400]: kubelet I0213 04:10:42.352511 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:42 localhost.localdomain microshift[132400]: kubelet I0213 04:10:42.352991 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:43.286492 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:44 localhost.localdomain microshift[132400]: kubelet I0213 04:10:44.607512 132400 kubelet.go:1409] "Image garbage collection succeeded" Feb 13 04:10:44 localhost.localdomain 
microshift[132400]: kubelet I0213 04:10:44.665177 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" Feb 13 04:10:44 localhost.localdomain microshift[132400]: kubelet E0213 04:10:44.665482 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:10:45 localhost.localdomain microshift[132400]: kubelet I0213 04:10:45.353319 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:45 localhost.localdomain microshift[132400]: kubelet I0213 04:10:45.353518 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:46 localhost.localdomain microshift[132400]: kubelet I0213 04:10:46.663753 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" Feb 13 04:10:46 localhost.localdomain microshift[132400]: kubelet E0213 04:10:46.663909 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:10:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:48.286623 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:48 localhost.localdomain microshift[132400]: kubelet I0213 04:10:48.354339 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:48 localhost.localdomain microshift[132400]: kubelet I0213 04:10:48.354560 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:51 localhost.localdomain microshift[132400]: kubelet I0213 04:10:51.355692 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:51 localhost.localdomain microshift[132400]: kubelet I0213 04:10:51.355728 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:52 localhost.localdomain microshift[132400]: kubelet I0213 04:10:52.663641 132400 scope.go:115] "RemoveContainer" 
containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b" Feb 13 04:10:52 localhost.localdomain microshift[132400]: kubelet E0213 04:10:52.664642 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:10:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:53.286718 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:54 localhost.localdomain microshift[132400]: kubelet I0213 04:10:54.356511 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:54 localhost.localdomain microshift[132400]: kubelet I0213 04:10:54.356943 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:55 localhost.localdomain microshift[132400]: kubelet I0213 04:10:55.668816 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" Feb 13 04:10:55 localhost.localdomain microshift[132400]: kubelet E0213 04:10:55.669127 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller 
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:10:57 localhost.localdomain microshift[132400]: kubelet I0213 04:10:57.357138 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:10:57 localhost.localdomain microshift[132400]: kubelet I0213 04:10:57.357568 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:10:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:10:58.286524 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:10:58 localhost.localdomain microshift[132400]: kubelet I0213 04:10:58.664266 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" Feb 13 04:10:58 localhost.localdomain microshift[132400]: kubelet E0213 04:10:58.664429 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:11:00 localhost.localdomain microshift[132400]: kubelet I0213 04:11:00.358119 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: 
Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:11:00 localhost.localdomain microshift[132400]: kubelet I0213 04:11:00.358210 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:11:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:03.287227 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:11:03 localhost.localdomain microshift[132400]: kubelet I0213 04:11:03.358579 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:11:03 localhost.localdomain microshift[132400]: kubelet I0213 04:11:03.358638 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:11:06 localhost.localdomain microshift[132400]: kubelet I0213 04:11:06.359665 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:11:06 localhost.localdomain microshift[132400]: kubelet I0213 04:11:06.359717 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:11:07 localhost.localdomain microshift[132400]: kubelet I0213 04:11:07.664266 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b" Feb 13 04:11:07 localhost.localdomain microshift[132400]: kubelet E0213 04:11:07.664537 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:11:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:08.287055 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:11:09 localhost.localdomain microshift[132400]: kubelet I0213 04:11:09.360438 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:11:09 localhost.localdomain microshift[132400]: kubelet I0213 04:11:09.360498 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:11:10 localhost.localdomain microshift[132400]: kubelet I0213 04:11:10.664619 132400 scope.go:115] "RemoveContainer" 
containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e" Feb 13 04:11:10 localhost.localdomain microshift[132400]: kubelet E0213 04:11:10.666083 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:11:12 localhost.localdomain microshift[132400]: kubelet I0213 04:11:12.360988 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:11:12 localhost.localdomain microshift[132400]: kubelet I0213 04:11:12.361243 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:11:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:13.286309 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:11:13 localhost.localdomain microshift[132400]: kubelet I0213 04:11:13.664041 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787" Feb 13 04:11:13 localhost.localdomain microshift[132400]: kubelet E0213 04:11:13.664223 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:11:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:11:13.863309 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:11:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:11:13.863338 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:11:15 localhost.localdomain microshift[132400]: kubelet I0213 04:11:15.362266 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:15 localhost.localdomain microshift[132400]: kubelet I0213 04:11:15.362317 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:18.286372 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:18 localhost.localdomain microshift[132400]: kubelet I0213 04:11:18.362913 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:18 localhost.localdomain microshift[132400]: kubelet I0213 04:11:18.362966 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:20 localhost.localdomain microshift[132400]: kubelet I0213 04:11:20.664407 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:11:20 localhost.localdomain microshift[132400]: kubelet E0213 04:11:20.664711 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:11:21 localhost.localdomain microshift[132400]: kubelet I0213 04:11:21.363306 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:21 localhost.localdomain microshift[132400]: kubelet I0213 04:11:21.363374 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:22 localhost.localdomain microshift[132400]: kubelet I0213 04:11:22.664112 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:11:22 localhost.localdomain microshift[132400]: kubelet E0213 04:11:22.665004 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:11:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:23.287111 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:24 localhost.localdomain microshift[132400]: kubelet I0213 04:11:24.364337 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:24 localhost.localdomain microshift[132400]: kubelet I0213 04:11:24.364965 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:24 localhost.localdomain microshift[132400]: kubelet I0213 04:11:24.632451 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:24 localhost.localdomain microshift[132400]: kubelet I0213 04:11:24.632741 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:27 localhost.localdomain microshift[132400]: kubelet I0213 04:11:27.365277 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:27 localhost.localdomain microshift[132400]: kubelet I0213 04:11:27.365638 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:28.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:28 localhost.localdomain microshift[132400]: kubelet I0213 04:11:28.663326 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787"
Feb 13 04:11:29 localhost.localdomain microshift[132400]: kubelet I0213 04:11:29.516748 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc}
Feb 13 04:11:30 localhost.localdomain microshift[132400]: kubelet I0213 04:11:30.366174 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:30 localhost.localdomain microshift[132400]: kubelet I0213 04:11:30.366563 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:32 localhost.localdomain microshift[132400]: kubelet I0213 04:11:32.663839 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:11:32 localhost.localdomain microshift[132400]: kubelet E0213 04:11:32.664094 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:11:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:33.286408 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:33 localhost.localdomain microshift[132400]: kubelet I0213 04:11:33.366960 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:33 localhost.localdomain microshift[132400]: kubelet I0213 04:11:33.367031 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:33 localhost.localdomain microshift[132400]: kubelet I0213 04:11:33.664151 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:11:33 localhost.localdomain microshift[132400]: kubelet E0213 04:11:33.664744 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:11:34 localhost.localdomain microshift[132400]: kubelet I0213 04:11:34.631641 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:34 localhost.localdomain microshift[132400]: kubelet I0213 04:11:34.631728 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:35 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:11:35.817484 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:11:35 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:11:35.817678 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:11:36 localhost.localdomain microshift[132400]: kubelet I0213 04:11:36.367788 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:36 localhost.localdomain microshift[132400]: kubelet I0213 04:11:36.367829 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:38.287082 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:39 localhost.localdomain microshift[132400]: kubelet I0213 04:11:39.368550 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:39 localhost.localdomain microshift[132400]: kubelet I0213 04:11:39.369123 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:42 localhost.localdomain microshift[132400]: kubelet I0213 04:11:42.369909 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:42 localhost.localdomain microshift[132400]: kubelet I0213 04:11:42.369990 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:43.287063 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:44 localhost.localdomain microshift[132400]: kubelet I0213 04:11:44.632051 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:44 localhost.localdomain microshift[132400]: kubelet I0213 04:11:44.632329 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:44 localhost.localdomain microshift[132400]: kubelet I0213 04:11:44.664618 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:11:44 localhost.localdomain microshift[132400]: kubelet E0213 04:11:44.664919 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:11:45 localhost.localdomain microshift[132400]: kubelet I0213 04:11:45.370489 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:45 localhost.localdomain microshift[132400]: kubelet I0213 04:11:45.370534 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:46 localhost.localdomain microshift[132400]: kubelet I0213 04:11:46.664051 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:11:46 localhost.localdomain microshift[132400]: kubelet E0213 04:11:46.664319 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:11:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:48.287102 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:48 localhost.localdomain microshift[132400]: kubelet I0213 04:11:48.371444 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:48 localhost.localdomain microshift[132400]: kubelet I0213 04:11:48.371507 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:51 localhost.localdomain microshift[132400]: kubelet I0213 04:11:51.372090 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:51 localhost.localdomain microshift[132400]: kubelet I0213 04:11:51.372138 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:53.286927 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kubelet I0213 04:11:54.373417 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kubelet I0213 04:11:54.373827 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kubelet I0213 04:11:54.630942 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kubelet I0213 04:11:54.630992 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:11:54.656464 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:11:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:11:54.656724 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:11:57 localhost.localdomain microshift[132400]: kubelet I0213 04:11:57.374310 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:11:57 localhost.localdomain microshift[132400]: kubelet I0213 04:11:57.374383 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:11:57 localhost.localdomain microshift[132400]: kubelet I0213 04:11:57.663917 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:11:57 localhost.localdomain microshift[132400]: kubelet E0213 04:11:57.664219 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:11:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:11:58.286981 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:12:00 localhost.localdomain microshift[132400]: kubelet E0213 04:12:00.372883 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:12:00 localhost.localdomain microshift[132400]: kubelet E0213 04:12:00.373092 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:12:00 localhost.localdomain microshift[132400]: kubelet I0213 04:12:00.374535 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:00 localhost.localdomain microshift[132400]: kubelet I0213 04:12:00.374675 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:01 localhost.localdomain microshift[132400]: kubelet I0213 04:12:01.663976 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:12:02 localhost.localdomain microshift[132400]: kubelet I0213 04:12:02.569509 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245}
Feb 13 04:12:02 localhost.localdomain microshift[132400]: kubelet I0213 04:12:02.987910 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:12:02 localhost.localdomain microshift[132400]: kubelet E0213 04:12:02.988356 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:14:04.988341949 -0500 EST m=+532.168688233 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:12:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:03.286881 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.375776 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.375971 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.571953 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" exitCode=255
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.571982 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc}
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.572007 132400 scope.go:115] "RemoveContainer" containerID="b77fc9d9db69f9a34e53be7cc1967a8cb8cb2fe28ea4b2215ba03c2b7b6f4787"
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet I0213 04:12:03.572208 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:12:03 localhost.localdomain microshift[132400]: kubelet E0213 04:12:03.572409 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:12:04 localhost.localdomain microshift[132400]: kubelet I0213 04:12:04.631688 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:04 localhost.localdomain microshift[132400]: kubelet I0213 04:12:04.631725 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:04 localhost.localdomain microshift[132400]: kubelet I0213 04:12:04.631752 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:12:04 localhost.localdomain microshift[132400]: kubelet I0213 04:12:04.632071 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:12:04 localhost.localdomain microshift[132400]: kubelet I0213 04:12:04.632153 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d" gracePeriod=30
Feb 13 04:12:05 localhost.localdomain microshift[132400]: kubelet I0213 04:12:05.577463 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" exitCode=1
Feb 13 04:12:05 localhost.localdomain microshift[132400]: kubelet I0213 04:12:05.577776 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245}
Feb 13 04:12:05 localhost.localdomain microshift[132400]: kubelet I0213 04:12:05.577823 132400 scope.go:115] "RemoveContainer" containerID="76d9d2e6ca5d81cdf6f7d787dcc70e728d12414935cc35b33751f5334581799b"
Feb 13 04:12:05 localhost.localdomain microshift[132400]: kubelet I0213 04:12:05.578103 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:12:05 localhost.localdomain microshift[132400]: kubelet E0213 04:12:05.578382 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:12:06 localhost.localdomain microshift[132400]: kubelet I0213 04:12:06.376108 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:06 localhost.localdomain microshift[132400]: kubelet I0213 04:12:06.376148 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:08.286452 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:12:08 localhost.localdomain microshift[132400]: kubelet I0213 04:12:08.664024 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:12:09 localhost.localdomain microshift[132400]: kubelet I0213 04:12:09.377186 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:09 localhost.localdomain microshift[132400]: kubelet I0213 04:12:09.377561 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:09 localhost.localdomain microshift[132400]: kubelet I0213 04:12:09.587117 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4}
Feb 13 04:12:09 localhost.localdomain microshift[132400]: kubelet I0213 04:12:09.587623 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:12:10 localhost.localdomain microshift[132400]: kubelet I0213 04:12:10.588093 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:10 localhost.localdomain microshift[132400]: kubelet I0213 04:12:10.588625 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:11 localhost.localdomain microshift[132400]: kubelet I0213 04:12:11.590489 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:11 localhost.localdomain microshift[132400]: kubelet I0213 04:12:11.590704 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.378280 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.378328 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.592919 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" exitCode=1
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.593193 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4}
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.593292 132400 scope.go:115] "RemoveContainer" containerID="75a956fe2813c61ccdfe59919d030b9295218140c68a9a3aa0714e33f3292a2e"
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet I0213 04:12:12.594333 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:12:12 localhost.localdomain microshift[132400]: kubelet E0213 04:12:12.594631 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:12:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:13.286777 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:12:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:12:13.606475 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:12:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:12:13.606878 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:12:15 localhost.localdomain microshift[132400]: kubelet I0213 04:12:15.378449 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:12:15 localhost.localdomain microshift[132400]: kubelet I0213 04:12:15.378537 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:12:16 localhost.localdomain microshift[132400]: kubelet I0213 04:12:16.664648 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:12:16 localhost.localdomain microshift[132400]: kubelet E0213 04:12:16.664935 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:12:17 localhost.localdomain microshift[132400]: kubelet I0213 04:12:17.664810 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:12:17 localhost.localdomain microshift[132400]: kubelet E0213 04:12:17.665366 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:12:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:18.286267 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:12:18 localhost.localdomain microshift[132400]: kubelet I0213 04:12:18.379793 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:18 localhost.localdomain microshift[132400]: kubelet I0213 04:12:18.379834 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:20 localhost.localdomain microshift[132400]: kubelet I0213 04:12:20.901474 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:12:20 localhost.localdomain microshift[132400]: kubelet I0213 04:12:20.902142 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:12:20 localhost.localdomain microshift[132400]: kubelet E0213 04:12:20.902729 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:12:21 localhost.localdomain microshift[132400]: kubelet I0213 04:12:21.379939 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:21 localhost.localdomain microshift[132400]: kubelet I0213 04:12:21.380087 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:23.287389 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:24 localhost.localdomain microshift[132400]: kubelet I0213 04:12:24.380413 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:24 localhost.localdomain microshift[132400]: kubelet I0213 04:12:24.380507 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:25 localhost.localdomain microshift[132400]: kubelet I0213 04:12:25.615564 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d" exitCode=0 Feb 13 04:12:25 localhost.localdomain microshift[132400]: kubelet I0213 04:12:25.615597 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d} Feb 13 04:12:25 localhost.localdomain microshift[132400]: kubelet I0213 04:12:25.615614 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted 
Data:3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79} Feb 13 04:12:25 localhost.localdomain microshift[132400]: kubelet I0213 04:12:25.615631 132400 scope.go:115] "RemoveContainer" containerID="c480a2c03c6a3de824aa03826b8a4f663d98fd932718f38eeef34f910078d8b0" Feb 13 04:12:26 localhost.localdomain microshift[132400]: kubelet I0213 04:12:26.192619 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:12:26 localhost.localdomain microshift[132400]: kubelet I0213 04:12:26.192950 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:12:26 localhost.localdomain microshift[132400]: kubelet E0213 04:12:26.193284 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:12:26 localhost.localdomain microshift[132400]: kubelet I0213 04:12:26.618118 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 04:12:27 localhost.localdomain microshift[132400]: kubelet I0213 04:12:27.381040 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:27 localhost.localdomain microshift[132400]: kubelet I0213 04:12:27.381094 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:27 localhost.localdomain microshift[132400]: kubelet I0213 04:12:27.381130 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:12:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:28.286541 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:12:30.421039 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:12:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:12:30.421333 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:12:30 localhost.localdomain microshift[132400]: kubelet I0213 04:12:30.664072 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:12:30 localhost.localdomain microshift[132400]: kubelet E0213 04:12:30.664267 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:12:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:33.286993 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:35 
localhost.localdomain microshift[132400]: kubelet I0213 04:12:35.665035 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:12:35 localhost.localdomain microshift[132400]: kubelet E0213 04:12:35.665495 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:12:36 localhost.localdomain microshift[132400]: kubelet I0213 04:12:36.665093 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:12:36 localhost.localdomain microshift[132400]: kubelet E0213 04:12:36.665525 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:12:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:38.286225 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:39 localhost.localdomain microshift[132400]: kubelet I0213 04:12:39.348147 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:39 localhost.localdomain microshift[132400]: kubelet I0213 04:12:39.348192 132400 prober.go:109] "Probe failed" 
probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:42 localhost.localdomain microshift[132400]: kubelet I0213 04:12:42.348997 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:42 localhost.localdomain microshift[132400]: kubelet I0213 04:12:42.349042 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:43.286908 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:45 localhost.localdomain microshift[132400]: kubelet I0213 04:12:45.349703 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:45 localhost.localdomain microshift[132400]: kubelet I0213 04:12:45.349905 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:45 localhost.localdomain microshift[132400]: 
kubelet I0213 04:12:45.669520 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:12:45 localhost.localdomain microshift[132400]: kubelet E0213 04:12:45.669823 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:12:47 localhost.localdomain microshift[132400]: kubelet I0213 04:12:47.663644 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:12:47 localhost.localdomain microshift[132400]: kubelet E0213 04:12:47.663922 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:12:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:48.286977 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:48 localhost.localdomain microshift[132400]: kubelet I0213 04:12:48.350521 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:48 localhost.localdomain microshift[132400]: kubelet I0213 04:12:48.350564 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:48 localhost.localdomain microshift[132400]: kubelet I0213 04:12:48.665021 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:12:48 localhost.localdomain microshift[132400]: kubelet E0213 04:12:48.665492 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:12:51 localhost.localdomain microshift[132400]: kubelet I0213 04:12:51.350900 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:51 localhost.localdomain microshift[132400]: kubelet I0213 04:12:51.350936 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:53.287016 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:12:54 localhost.localdomain microshift[132400]: kubelet I0213 04:12:54.351897 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: 
Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:54 localhost.localdomain microshift[132400]: kubelet I0213 04:12:54.351948 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:55 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:12:55.108053 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:12:55 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:12:55.108197 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:12:56 localhost.localdomain microshift[132400]: kubelet I0213 04:12:56.664552 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:12:56 localhost.localdomain microshift[132400]: kubelet E0213 04:12:56.664973 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:12:57 
localhost.localdomain microshift[132400]: kubelet I0213 04:12:57.352803 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:12:57 localhost.localdomain microshift[132400]: kubelet I0213 04:12:57.353002 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:12:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:12:58.286240 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:13:00 localhost.localdomain microshift[132400]: kubelet I0213 04:13:00.353999 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:13:00 localhost.localdomain microshift[132400]: kubelet I0213 04:13:00.354071 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:13:02 localhost.localdomain microshift[132400]: kubelet I0213 04:13:02.664928 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:13:02 localhost.localdomain microshift[132400]: kubelet E0213 04:13:02.665409 132400 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:13:02 localhost.localdomain microshift[132400]: kubelet I0213 04:13:02.665432 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:13:02 localhost.localdomain microshift[132400]: kubelet E0213 04:13:02.665749 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:13:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:03.287084 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:13:03 localhost.localdomain microshift[132400]: kubelet I0213 04:13:03.354349 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:13:03 localhost.localdomain microshift[132400]: kubelet I0213 04:13:03.354400 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:13:06 localhost.localdomain microshift[132400]: kubelet I0213 
04:13:06.355377 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:13:06 localhost.localdomain microshift[132400]: kubelet I0213 04:13:06.355450 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:13:07 localhost.localdomain microshift[132400]: kubelet I0213 04:13:07.663235 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:13:07 localhost.localdomain microshift[132400]: kubelet E0213 04:13:07.663785 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:13:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:08.287169 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:13:09 localhost.localdomain microshift[132400]: kubelet I0213 04:13:09.356539 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:13:09 localhost.localdomain microshift[132400]: kubelet I0213 04:13:09.356939 132400 prober.go:109] 
"Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:13:12 localhost.localdomain microshift[132400]: kubelet I0213 04:13:12.357780 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:13:12 localhost.localdomain microshift[132400]: kubelet I0213 04:13:12.358289 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:13:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:13.286946 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:13:14 localhost.localdomain microshift[132400]: kubelet I0213 04:13:14.664231 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:13:14 localhost.localdomain microshift[132400]: kubelet E0213 04:13:14.664914 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:13:15 localhost.localdomain microshift[132400]: kubelet I0213 04:13:15.358989 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p 
container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:15 localhost.localdomain microshift[132400]: kubelet I0213 04:13:15.359254 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:16 localhost.localdomain microshift[132400]: kubelet I0213 04:13:16.665933 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:13:16 localhost.localdomain microshift[132400]: kubelet E0213 04:13:16.666395 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:13:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:18.286829 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:18 localhost.localdomain microshift[132400]: kubelet I0213 04:13:18.360019 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:18 localhost.localdomain microshift[132400]: kubelet I0213 04:13:18.360076 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:18 localhost.localdomain microshift[132400]: kubelet I0213 04:13:18.665682 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:13:18 localhost.localdomain microshift[132400]: kubelet E0213 04:13:18.666514 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:13:21 localhost.localdomain microshift[132400]: kubelet I0213 04:13:21.361184 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:21 localhost.localdomain microshift[132400]: kubelet I0213 04:13:21.361232 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:23.286232 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:24 localhost.localdomain microshift[132400]: kubelet I0213 04:13:24.362156 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:24 localhost.localdomain microshift[132400]: kubelet I0213 04:13:24.362213 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:26 localhost.localdomain microshift[132400]: kubelet I0213 04:13:26.664922 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:13:26 localhost.localdomain microshift[132400]: kubelet E0213 04:13:26.665162 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:13:27 localhost.localdomain microshift[132400]: kubelet I0213 04:13:27.363298 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:27 localhost.localdomain microshift[132400]: kubelet I0213 04:13:27.363348 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:28.286397 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:28 localhost.localdomain microshift[132400]: kubelet I0213 04:13:28.663342 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:13:28 localhost.localdomain microshift[132400]: kubelet E0213 04:13:28.663955 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:13:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:13:28.972757 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:13:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:13:28.972783 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:13:30 localhost.localdomain microshift[132400]: kubelet I0213 04:13:30.364301 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:30 localhost.localdomain microshift[132400]: kubelet I0213 04:13:30.364743 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:32 localhost.localdomain microshift[132400]: kubelet I0213 04:13:32.664457 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:13:32 localhost.localdomain microshift[132400]: kubelet E0213 04:13:32.665071 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:13:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:33.287200 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:33 localhost.localdomain microshift[132400]: kubelet I0213 04:13:33.365554 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:33 localhost.localdomain microshift[132400]: kubelet I0213 04:13:33.365726 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:34 localhost.localdomain microshift[132400]: kubelet I0213 04:13:34.632373 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:34 localhost.localdomain microshift[132400]: kubelet I0213 04:13:34.632406 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:36 localhost.localdomain microshift[132400]: kubelet I0213 04:13:36.366170 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:36 localhost.localdomain microshift[132400]: kubelet I0213 04:13:36.366215 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:38.287075 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:39 localhost.localdomain microshift[132400]: kubelet I0213 04:13:39.367068 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:39 localhost.localdomain microshift[132400]: kubelet I0213 04:13:39.367389 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:39 localhost.localdomain microshift[132400]: kubelet I0213 04:13:39.664228 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:13:39 localhost.localdomain microshift[132400]: kubelet E0213 04:13:39.664535 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:13:40 localhost.localdomain microshift[132400]: kubelet I0213 04:13:40.664073 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:13:40 localhost.localdomain microshift[132400]: kubelet E0213 04:13:40.664998 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:13:42 localhost.localdomain microshift[132400]: kubelet I0213 04:13:42.368387 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:42 localhost.localdomain microshift[132400]: kubelet I0213 04:13:42.368449 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:43.286739 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:44 localhost.localdomain microshift[132400]: kubelet I0213 04:13:44.631863 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:44 localhost.localdomain microshift[132400]: kubelet I0213 04:13:44.632169 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:44 localhost.localdomain microshift[132400]: kubelet I0213 04:13:44.664174 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:13:44 localhost.localdomain microshift[132400]: kubelet E0213 04:13:44.664445 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:13:45 localhost.localdomain microshift[132400]: kubelet I0213 04:13:45.369376 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:45 localhost.localdomain microshift[132400]: kubelet I0213 04:13:45.369430 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:48.287107 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:48 localhost.localdomain microshift[132400]: kubelet I0213 04:13:48.370204 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:48 localhost.localdomain microshift[132400]: kubelet I0213 04:13:48.370375 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:13:51.329506 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:13:51.329791 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kubelet I0213 04:13:51.370779 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kubelet I0213 04:13:51.370926 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kubelet I0213 04:13:51.664285 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:13:51 localhost.localdomain microshift[132400]: kubelet E0213 04:13:51.664597 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:13:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:53.286585 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:13:54 localhost.localdomain microshift[132400]: kubelet I0213 04:13:54.371505 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:54 localhost.localdomain microshift[132400]: kubelet I0213 04:13:54.371868 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:54 localhost.localdomain microshift[132400]: kubelet I0213 04:13:54.631803 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:54 localhost.localdomain microshift[132400]: kubelet I0213 04:13:54.632106 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:55 localhost.localdomain microshift[132400]: kubelet I0213 04:13:55.669166 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:13:55 localhost.localdomain microshift[132400]: kubelet E0213 04:13:55.669613 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:13:57 localhost.localdomain microshift[132400]: kubelet I0213 04:13:57.372986 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:13:57 localhost.localdomain microshift[132400]: kubelet I0213 04:13:57.373391 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:13:57 localhost.localdomain microshift[132400]: kubelet I0213 04:13:57.664117 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:13:57 localhost.localdomain microshift[132400]: kubelet E0213 04:13:57.664281 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:13:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:13:58.286252 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:00 localhost.localdomain microshift[132400]: kubelet I0213 04:14:00.374164 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:00 localhost.localdomain microshift[132400]: kubelet I0213 04:14:00.374228 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:03.286767 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet I0213 04:14:03.375011 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet I0213 04:14:03.375238 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet E0213 04:14:03.563676 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet E0213 04:14:03.563711 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet I0213 04:14:03.663376 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:14:03 localhost.localdomain microshift[132400]: kubelet E0213 04:14:03.663730 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:14:04 localhost.localdomain microshift[132400]: kubelet I0213 04:14:04.631984 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:04 localhost.localdomain microshift[132400]: kubelet I0213 04:14:04.632405 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:05 localhost.localdomain microshift[132400]: kubelet I0213 04:14:05.009684 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:14:05 localhost.localdomain microshift[132400]: kubelet E0213 04:14:05.009820 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:16:07.0098091 -0500 EST m=+654.190155368 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:14:06 localhost.localdomain microshift[132400]: kubelet I0213 04:14:06.376253 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:06 localhost.localdomain microshift[132400]: kubelet I0213 04:14:06.376305 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:08.286254 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:08 localhost.localdomain microshift[132400]: kubelet I0213 04:14:08.664230 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:14:08 localhost.localdomain microshift[132400]: kubelet E0213 04:14:08.664898 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:14:09 localhost.localdomain microshift[132400]: kubelet I0213 04:14:09.376770 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:09 localhost.localdomain microshift[132400]: kubelet I0213 04:14:09.376819 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:12 localhost.localdomain microshift[132400]: kubelet I0213 04:14:12.377493 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:12 localhost.localdomain microshift[132400]: kubelet I0213 04:14:12.377538 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:12 localhost.localdomain microshift[132400]: kubelet I0213 04:14:12.664797 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:14:12 localhost.localdomain microshift[132400]: kubelet E0213 04:14:12.664957 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:14:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:13.286872 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:14 localhost.localdomain microshift[132400]: kubelet I0213 04:14:14.631483 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:14 localhost.localdomain microshift[132400]: kubelet I0213 04:14:14.632209 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:14 localhost.localdomain microshift[132400]: kubelet I0213 04:14:14.632329 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:14:14 localhost.localdomain microshift[132400]: kubelet I0213 04:14:14.633010 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:14:14 localhost.localdomain microshift[132400]: kubelet I0213 04:14:14.633307 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79" gracePeriod=30
Feb 13 04:14:15 localhost.localdomain microshift[132400]: kubelet I0213 04:14:15.377625 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:15 localhost.localdomain microshift[132400]: kubelet I0213 04:14:15.377862 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:17 localhost.localdomain microshift[132400]: kubelet I0213 04:14:17.663962 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:14:17 localhost.localdomain microshift[132400]: kubelet E0213 04:14:17.664390 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:14:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:18.287268 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:18 localhost.localdomain microshift[132400]: kubelet I0213 04:14:18.378422 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:18 localhost.localdomain microshift[132400]: kubelet I0213 04:14:18.378694 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:20 localhost.localdomain microshift[132400]: kubelet I0213 04:14:20.664386 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:14:20 localhost.localdomain microshift[132400]: kubelet E0213 04:14:20.665248 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:14:21 localhost.localdomain microshift[132400]: kubelet I0213 04:14:21.379050 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:21 localhost.localdomain microshift[132400]: kubelet I0213 04:14:21.379093 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:23.286265 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:14:24.158373 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:14:24.158517 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kubelet I0213 04:14:24.379311 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kubelet I0213 04:14:24.379990 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kubelet I0213 04:14:24.664182 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc"
Feb 13 04:14:24 localhost.localdomain microshift[132400]: kubelet E0213 04:14:24.664350 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:14:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:14:26.433718 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:14:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:14:26.433741 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:14:27 localhost.localdomain microshift[132400]: kubelet I0213 04:14:27.380485 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:14:27 localhost.localdomain microshift[132400]: kubelet I0213 04:14:27.380696 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:14:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:28.286335 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:30 localhost.localdomain microshift[132400]: kubelet I0213 04:14:30.381697 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:30 localhost.localdomain microshift[132400]: kubelet I0213 04:14:30.382064 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:31 localhost.localdomain microshift[132400]: kubelet I0213 04:14:31.664308 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:14:31 localhost.localdomain microshift[132400]: kubelet E0213 04:14:31.664965 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:14:31 localhost.localdomain microshift[132400]: kubelet I0213 04:14:31.665006 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:14:31 localhost.localdomain microshift[132400]: kubelet E0213 04:14:31.665285 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:14:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:33.286190 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:33 localhost.localdomain microshift[132400]: kubelet I0213 04:14:33.382771 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:33 localhost.localdomain microshift[132400]: kubelet I0213 04:14:33.383080 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:34 localhost.localdomain microshift[132400]: kubelet I0213 04:14:34.805144 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79" exitCode=0 Feb 13 04:14:34 localhost.localdomain microshift[132400]: kubelet I0213 04:14:34.805170 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79} Feb 13 04:14:34 localhost.localdomain microshift[132400]: kubelet I0213 04:14:34.805191 132400 scope.go:115] "RemoveContainer" 
containerID="e79d1d2116be4b626b3a6b931b35ecc671e3164b2cd42c61457883c8252eda7d" Feb 13 04:14:35 localhost.localdomain microshift[132400]: kubelet I0213 04:14:35.663950 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:14:35 localhost.localdomain microshift[132400]: kubelet E0213 04:14:35.664156 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:14:35 localhost.localdomain microshift[132400]: kubelet I0213 04:14:35.815511 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d} Feb 13 04:14:36 localhost.localdomain microshift[132400]: kubelet I0213 04:14:36.383396 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:36 localhost.localdomain microshift[132400]: kubelet I0213 04:14:36.383447 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:36 localhost.localdomain microshift[132400]: kubelet I0213 04:14:36.383490 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:14:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:38.287054 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:43.286933 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:44 localhost.localdomain microshift[132400]: kubelet I0213 04:14:44.664597 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:14:44 localhost.localdomain microshift[132400]: kubelet E0213 04:14:44.665265 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:14:46 localhost.localdomain microshift[132400]: kubelet I0213 04:14:46.664782 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:14:46 localhost.localdomain microshift[132400]: kubelet E0213 04:14:46.665819 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:14:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:48.287060 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:48 localhost.localdomain microshift[132400]: kubelet I0213 04:14:48.346620 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:48 localhost.localdomain microshift[132400]: kubelet I0213 04:14:48.346879 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:50 localhost.localdomain microshift[132400]: kubelet I0213 04:14:50.664855 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:14:50 localhost.localdomain microshift[132400]: kubelet I0213 04:14:50.850004 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57} Feb 13 04:14:51 localhost.localdomain microshift[132400]: kubelet I0213 04:14:51.347069 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:51 localhost.localdomain microshift[132400]: kubelet I0213 04:14:51.347111 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:53 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 04:14:53.287014 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:14:54 localhost.localdomain microshift[132400]: kubelet I0213 04:14:54.348004 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:54 localhost.localdomain microshift[132400]: kubelet I0213 04:14:54.348337 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:56 localhost.localdomain microshift[132400]: kubelet I0213 04:14:56.665189 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:14:56 localhost.localdomain microshift[132400]: kubelet E0213 04:14:56.666434 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:14:57 localhost.localdomain microshift[132400]: kubelet I0213 04:14:57.348621 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:14:57 localhost.localdomain microshift[132400]: kubelet I0213 04:14:57.348875 132400 
prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:14:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:14:58.286634 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:00 localhost.localdomain microshift[132400]: kubelet I0213 04:15:00.349684 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:00 localhost.localdomain microshift[132400]: kubelet I0213 04:15:00.349737 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:00 localhost.localdomain microshift[132400]: kubelet I0213 04:15:00.663928 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:15:00 localhost.localdomain microshift[132400]: kubelet E0213 04:15:00.664417 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:15:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:15:00.755282 132400 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:15:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:15:00.755434 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:15:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:03.286525 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:03 localhost.localdomain microshift[132400]: kubelet I0213 04:15:03.350379 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:03 localhost.localdomain microshift[132400]: kubelet I0213 04:15:03.350432 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:04 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:15:04.256529 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:15:04 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:15:04.256560 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch 
*v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:15:06 localhost.localdomain microshift[132400]: kubelet I0213 04:15:06.351076 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:06 localhost.localdomain microshift[132400]: kubelet I0213 04:15:06.351129 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:08.287114 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:09 localhost.localdomain microshift[132400]: kubelet I0213 04:15:09.352159 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:09 localhost.localdomain microshift[132400]: kubelet I0213 04:15:09.352778 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:10 localhost.localdomain microshift[132400]: kubelet I0213 04:15:10.663674 132400 scope.go:115] "RemoveContainer" 
containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:15:10 localhost.localdomain microshift[132400]: kubelet E0213 04:15:10.664164 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:15:11 localhost.localdomain microshift[132400]: kubelet I0213 04:15:11.664152 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:15:11 localhost.localdomain microshift[132400]: kubelet E0213 04:15:11.664801 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:15:12 localhost.localdomain microshift[132400]: kubelet I0213 04:15:12.353295 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:12 localhost.localdomain microshift[132400]: kubelet I0213 04:15:12.353547 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:13 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:13.286555 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:15 localhost.localdomain microshift[132400]: kubelet I0213 04:15:15.355552 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:15 localhost.localdomain microshift[132400]: kubelet I0213 04:15:15.355892 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:18.287002 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:18 localhost.localdomain microshift[132400]: kubelet I0213 04:15:18.356934 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:18 localhost.localdomain microshift[132400]: kubelet I0213 04:15:18.357018 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:21 localhost.localdomain microshift[132400]: kubelet I0213 04:15:21.357979 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe 
status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:21 localhost.localdomain microshift[132400]: kubelet I0213 04:15:21.358020 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:23.287177 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:23 localhost.localdomain microshift[132400]: kubelet I0213 04:15:23.663566 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:15:23 localhost.localdomain microshift[132400]: kubelet E0213 04:15:23.664030 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:15:24 localhost.localdomain microshift[132400]: kubelet I0213 04:15:24.358978 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:24 localhost.localdomain microshift[132400]: kubelet I0213 04:15:24.359349 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:24 localhost.localdomain microshift[132400]: kubelet I0213 04:15:24.664683 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:15:24 localhost.localdomain microshift[132400]: kubelet E0213 04:15:24.664995 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:15:25 localhost.localdomain microshift[132400]: kubelet I0213 04:15:25.901378 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" exitCode=255 Feb 13 04:15:25 localhost.localdomain microshift[132400]: kubelet I0213 04:15:25.901562 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57} Feb 13 04:15:25 localhost.localdomain microshift[132400]: kubelet I0213 04:15:25.901584 132400 scope.go:115] "RemoveContainer" containerID="91dc43b079a88f3ffa1f0ee5b899c00d0f38f28ab007493603a3d17b91d1b1dc" Feb 13 04:15:25 localhost.localdomain microshift[132400]: kubelet I0213 04:15:25.901805 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:15:25 localhost.localdomain microshift[132400]: kubelet E0213 04:15:25.901980 132400 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:15:27 localhost.localdomain microshift[132400]: kubelet I0213 04:15:27.360397 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:27 localhost.localdomain microshift[132400]: kubelet I0213 04:15:27.360450 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:15:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:28.287093 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:15:30 localhost.localdomain microshift[132400]: kubelet I0213 04:15:30.361174 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:15:30 localhost.localdomain microshift[132400]: kubelet I0213 04:15:30.361240 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Feb 13 04:15:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:33.287043 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:15:33 localhost.localdomain microshift[132400]: kubelet I0213 04:15:33.361348 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:33 localhost.localdomain microshift[132400]: kubelet I0213 04:15:33.361539 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:34 localhost.localdomain microshift[132400]: kubelet I0213 04:15:34.663556 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:15:34 localhost.localdomain microshift[132400]: kubelet E0213 04:15:34.664150 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:15:36 localhost.localdomain microshift[132400]: kubelet I0213 04:15:36.362459 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:36 localhost.localdomain microshift[132400]: kubelet I0213 04:15:36.362506 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:37 localhost.localdomain microshift[132400]: kubelet I0213 04:15:37.663837 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:15:37 localhost.localdomain microshift[132400]: kubelet E0213 04:15:37.664099 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:15:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:38.286640 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:15:39 localhost.localdomain microshift[132400]: kubelet I0213 04:15:39.363365 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:39 localhost.localdomain microshift[132400]: kubelet I0213 04:15:39.363431 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:39 localhost.localdomain microshift[132400]: kubelet I0213 04:15:39.663744 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:15:39 localhost.localdomain microshift[132400]: kubelet E0213 04:15:39.664130 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:15:42 localhost.localdomain microshift[132400]: kubelet I0213 04:15:42.363666 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:42 localhost.localdomain microshift[132400]: kubelet I0213 04:15:42.363958 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:43.287276 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:15:44 localhost.localdomain microshift[132400]: kubelet I0213 04:15:44.631201 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:44 localhost.localdomain microshift[132400]: kubelet I0213 04:15:44.631256 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:45 localhost.localdomain microshift[132400]: kubelet I0213 04:15:45.365010 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:45 localhost.localdomain microshift[132400]: kubelet I0213 04:15:45.365205 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:47 localhost.localdomain microshift[132400]: kubelet I0213 04:15:47.664195 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:15:47 localhost.localdomain microshift[132400]: kubelet E0213 04:15:47.664838 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:15:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:48.286325 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:15:48 localhost.localdomain microshift[132400]: kubelet I0213 04:15:48.365776 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:48 localhost.localdomain microshift[132400]: kubelet I0213 04:15:48.365951 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:49 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:15:49.863170 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:15:49 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:15:49.863795 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:15:49 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:15:49.938261 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:15:49 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:15:49.938286 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:15:50 localhost.localdomain microshift[132400]: kubelet I0213 04:15:50.665019 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:15:50 localhost.localdomain microshift[132400]: kubelet E0213 04:15:50.665425 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:15:50 localhost.localdomain microshift[132400]: kubelet I0213 04:15:50.665864 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:15:50 localhost.localdomain microshift[132400]: kubelet E0213 04:15:50.666583 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:15:51 localhost.localdomain microshift[132400]: kubelet I0213 04:15:51.366957 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:51 localhost.localdomain microshift[132400]: kubelet I0213 04:15:51.367162 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:53.287222 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:15:54 localhost.localdomain microshift[132400]: kubelet I0213 04:15:54.368092 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:54 localhost.localdomain microshift[132400]: kubelet I0213 04:15:54.368438 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:54 localhost.localdomain microshift[132400]: kubelet I0213 04:15:54.632258 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:54 localhost.localdomain microshift[132400]: kubelet I0213 04:15:54.632506 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:57 localhost.localdomain microshift[132400]: kubelet I0213 04:15:57.369588 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:15:57 localhost.localdomain microshift[132400]: kubelet I0213 04:15:57.369686 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:15:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:15:58.287375 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:00 localhost.localdomain microshift[132400]: kubelet I0213 04:16:00.369848 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:00 localhost.localdomain microshift[132400]: kubelet I0213 04:16:00.369903 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:00 localhost.localdomain microshift[132400]: kubelet I0213 04:16:00.663716 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:16:00 localhost.localdomain microshift[132400]: kubelet E0213 04:16:00.664174 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:16:02 localhost.localdomain microshift[132400]: kubelet I0213 04:16:02.664359 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:16:02 localhost.localdomain microshift[132400]: kubelet E0213 04:16:02.664697 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:16:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:03.286885 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:03 localhost.localdomain microshift[132400]: kubelet I0213 04:16:03.370230 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:03 localhost.localdomain microshift[132400]: kubelet I0213 04:16:03.370276 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:04 localhost.localdomain microshift[132400]: kubelet I0213 04:16:04.632185 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:04 localhost.localdomain microshift[132400]: kubelet I0213 04:16:04.632595 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:05 localhost.localdomain microshift[132400]: kubelet I0213 04:16:05.664955 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:16:05 localhost.localdomain microshift[132400]: kubelet E0213 04:16:05.665103 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:16:06 localhost.localdomain microshift[132400]: kubelet I0213 04:16:06.371399 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:06 localhost.localdomain microshift[132400]: kubelet I0213 04:16:06.371453 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:06 localhost.localdomain microshift[132400]: kubelet E0213 04:16:06.759506 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:16:06 localhost.localdomain microshift[132400]: kubelet E0213 04:16:06.759541 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:16:07 localhost.localdomain microshift[132400]: kubelet I0213 04:16:07.026782 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:16:07 localhost.localdomain microshift[132400]: kubelet E0213 04:16:07.027061 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:18:09.027049834 -0500 EST m=+776.207396112 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:16:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:08.286843 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:09 localhost.localdomain microshift[132400]: kubelet I0213 04:16:09.372266 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:09 localhost.localdomain microshift[132400]: kubelet I0213 04:16:09.372566 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:11 localhost.localdomain microshift[132400]: kubelet I0213 04:16:11.664263 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:16:11 localhost.localdomain microshift[132400]: kubelet E0213 04:16:11.664888 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:16:12 localhost.localdomain microshift[132400]: kubelet I0213 04:16:12.373087 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:12 localhost.localdomain microshift[132400]: kubelet I0213 04:16:12.373142 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:13.287211 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:14 localhost.localdomain microshift[132400]: kubelet I0213 04:16:14.632629 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:14 localhost.localdomain microshift[132400]: kubelet I0213 04:16:14.632686 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:15 localhost.localdomain microshift[132400]: kubelet I0213 04:16:15.373589 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:15 localhost.localdomain microshift[132400]: kubelet I0213 04:16:15.373649 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:16 localhost.localdomain microshift[132400]: kubelet I0213 04:16:16.665667 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:16:16 localhost.localdomain microshift[132400]: kubelet E0213 04:16:16.665940 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:16:16 localhost.localdomain microshift[132400]: kubelet I0213 04:16:16.666142 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:16:16 localhost.localdomain microshift[132400]: kubelet E0213 04:16:16.666478 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:16:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:18.287383 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:18 localhost.localdomain microshift[132400]: kubelet I0213 04:16:18.374534 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:18 localhost.localdomain microshift[132400]: kubelet I0213 04:16:18.374614 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:21 localhost.localdomain microshift[132400]: kubelet I0213 04:16:21.375706 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:21 localhost.localdomain microshift[132400]: kubelet I0213 04:16:21.375748 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:23.287162 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.376528 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.376578 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.631846 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.632087 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.632147 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.632505 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:16:24 localhost.localdomain microshift[132400]: kubelet I0213 04:16:24.632655 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" gracePeriod=30
Feb 13 04:16:26 localhost.localdomain microshift[132400]: kubelet I0213 04:16:26.664351 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4"
Feb 13 04:16:26 localhost.localdomain microshift[132400]: kubelet E0213 04:16:26.664835 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:16:27 localhost.localdomain microshift[132400]: kubelet I0213 04:16:27.376848 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:27 localhost.localdomain microshift[132400]: kubelet I0213 04:16:27.377070 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:27 localhost.localdomain microshift[132400]: kubelet I0213 04:16:27.663322 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:16:27 localhost.localdomain microshift[132400]: kubelet E0213 04:16:27.663503 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:16:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:28.286791 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:29 localhost.localdomain microshift[132400]: kubelet I0213 04:16:29.664396 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245"
Feb 13 04:16:29 localhost.localdomain microshift[132400]: kubelet E0213 04:16:29.665006 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:16:30 localhost.localdomain microshift[132400]: kubelet I0213 04:16:30.378090 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:30 localhost.localdomain microshift[132400]: kubelet I0213 04:16:30.378274 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:33.286590 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:33 localhost.localdomain microshift[132400]: kubelet I0213 04:16:33.379278 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:33 localhost.localdomain microshift[132400]: kubelet I0213 04:16:33.379506 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:36 localhost.localdomain microshift[132400]: kubelet I0213 04:16:36.380060 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:36 localhost.localdomain microshift[132400]: kubelet I0213 04:16:36.380168 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:16:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:38.286781 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:16:39 localhost.localdomain microshift[132400]: kubelet I0213 04:16:39.380639 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:16:39 localhost.localdomain microshift[132400]: kubelet
I0213 04:16:39.380693 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:16:40 localhost.localdomain microshift[132400]: kubelet I0213 04:16:40.664170 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:16:40 localhost.localdomain microshift[132400]: kubelet E0213 04:16:40.664478 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:16:42 localhost.localdomain microshift[132400]: kubelet I0213 04:16:42.381700 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:16:42 localhost.localdomain microshift[132400]: kubelet I0213 04:16:42.382434 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:16:42 localhost.localdomain microshift[132400]: kubelet I0213 04:16:42.664422 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:16:42 
localhost.localdomain microshift[132400]: kubelet E0213 04:16:42.665572 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:16:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:43.286807 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:16:44 localhost.localdomain microshift[132400]: kubelet I0213 04:16:44.669385 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:16:44 localhost.localdomain microshift[132400]: kubelet E0213 04:16:44.670322 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:16:44 localhost.localdomain microshift[132400]: kubelet E0213 04:16:44.752532 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.020137 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" exitCode=0 Feb 
13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.020162 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d} Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.020183 132400 scope.go:115] "RemoveContainer" containerID="3fd54e0332d3e8e459deeb231dbaa24bf6582e2f94b22cb3fb8b9ea1bc0b0e79" Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.020393 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet E0213 04:16:45.020590 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.382935 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:16:45 localhost.localdomain microshift[132400]: kubelet I0213 04:16:45.383223 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:16:47 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:16:47.194843 132400 
reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:16:47 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:16:47.195201 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:16:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:48.286963 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:16:49 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:16:49.318213 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:16:49 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:16:49.318243 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:16:51 localhost.localdomain microshift[132400]: kubelet I0213 04:16:51.664300 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:16:51 localhost.localdomain microshift[132400]: kubelet E0213 04:16:51.664796 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller 
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:16:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:53.286895 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:16:55 localhost.localdomain microshift[132400]: kubelet I0213 04:16:55.669524 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:16:55 localhost.localdomain microshift[132400]: kubelet E0213 04:16:55.669926 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:16:56 localhost.localdomain microshift[132400]: kubelet I0213 04:16:56.663904 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:16:56 localhost.localdomain microshift[132400]: kubelet E0213 04:16:56.664687 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:16:56 localhost.localdomain microshift[132400]: kubelet I0213 04:16:56.668734 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:16:56 localhost.localdomain microshift[132400]: kubelet E0213 04:16:56.669109 132400 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:16:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:16:58.286756 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:03.286827 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:06 localhost.localdomain microshift[132400]: kubelet I0213 04:17:06.664983 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:17:06 localhost.localdomain microshift[132400]: kubelet E0213 04:17:06.665529 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:17:07 localhost.localdomain microshift[132400]: kubelet I0213 04:17:07.664048 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:17:07 localhost.localdomain microshift[132400]: kubelet E0213 04:17:07.664469 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
Feb 13 04:17:07 localhost.localdomain microshift[132400]: kubelet I0213 04:17:07.664912 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:17:08 localhost.localdomain microshift[132400]: kubelet I0213 04:17:08.065777 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c} Feb 13 04:17:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:08.287219 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:10 localhost.localdomain microshift[132400]: kubelet I0213 04:17:10.664468 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:17:10 localhost.localdomain microshift[132400]: kubelet E0213 04:17:10.664755 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:17:11 localhost.localdomain microshift[132400]: kubelet I0213 04:17:11.071411 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" exitCode=1 Feb 13 04:17:11 localhost.localdomain microshift[132400]: kubelet I0213 04:17:11.071644 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c} 
Feb 13 04:17:11 localhost.localdomain microshift[132400]: kubelet I0213 04:17:11.071744 132400 scope.go:115] "RemoveContainer" containerID="7f05d8df0f639ab640173f305e3c8f2998469c4b528513c97e500e06c8e9f245" Feb 13 04:17:11 localhost.localdomain microshift[132400]: kubelet I0213 04:17:11.072051 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:17:11 localhost.localdomain microshift[132400]: kubelet E0213 04:17:11.072346 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:17:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:13.286551 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:18.287163 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:19 localhost.localdomain microshift[132400]: kubelet I0213 04:17:19.663747 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:17:20 localhost.localdomain microshift[132400]: kubelet I0213 04:17:20.087801 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d} Feb 13 04:17:20 localhost.localdomain microshift[132400]: kubelet I0213 04:17:20.088805 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:17:20 localhost.localdomain 
microshift[132400]: kubelet I0213 04:17:20.901935 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:17:20 localhost.localdomain microshift[132400]: kubelet I0213 04:17:20.902323 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:17:20 localhost.localdomain microshift[132400]: kubelet E0213 04:17:20.902553 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:17:21 localhost.localdomain microshift[132400]: kubelet I0213 04:17:21.088719 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:17:21 localhost.localdomain microshift[132400]: kubelet I0213 04:17:21.089091 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:17:21 localhost.localdomain microshift[132400]: kubelet I0213 04:17:21.664093 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:17:21 localhost.localdomain microshift[132400]: kubelet E0213 04:17:21.664499 132400 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:17:22 localhost.localdomain microshift[132400]: kubelet I0213 04:17:22.091391 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:17:22 localhost.localdomain microshift[132400]: kubelet I0213 04:17:22.091997 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:17:22 localhost.localdomain microshift[132400]: kubelet I0213 04:17:22.663485 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:17:22 localhost.localdomain microshift[132400]: kubelet E0213 04:17:22.663675 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:17:23 localhost.localdomain microshift[132400]: kubelet I0213 04:17:23.094693 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e 
containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" exitCode=1 Feb 13 04:17:23 localhost.localdomain microshift[132400]: kubelet I0213 04:17:23.094723 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d} Feb 13 04:17:23 localhost.localdomain microshift[132400]: kubelet I0213 04:17:23.094744 132400 scope.go:115] "RemoveContainer" containerID="289ba1e3084d0cbc2e43814fadf6e2b2153b62ffd7bb9202756444d72c5b60e4" Feb 13 04:17:23 localhost.localdomain microshift[132400]: kubelet I0213 04:17:23.094971 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:17:23 localhost.localdomain microshift[132400]: kubelet E0213 04:17:23.095243 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:17:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:23.286735 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:26 localhost.localdomain microshift[132400]: kubelet I0213 04:17:26.192862 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:17:26 localhost.localdomain microshift[132400]: kubelet I0213 04:17:26.193751 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:17:26 localhost.localdomain microshift[132400]: kubelet 
E0213 04:17:26.194175 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:17:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:28.287119 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:33.286387 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:33 localhost.localdomain microshift[132400]: kubelet I0213 04:17:33.664191 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:17:33 localhost.localdomain microshift[132400]: kubelet I0213 04:17:33.664522 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:17:33 localhost.localdomain microshift[132400]: kubelet E0213 04:17:33.664845 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:17:33 localhost.localdomain microshift[132400]: kubelet E0213 04:17:33.665010 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:17:35 localhost.localdomain microshift[132400]: kubelet I0213 04:17:35.664747 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:17:35 localhost.localdomain microshift[132400]: kubelet E0213 04:17:35.665723 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:17:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:38.286230 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:38 localhost.localdomain microshift[132400]: kubelet I0213 04:17:38.663964 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:17:38 localhost.localdomain microshift[132400]: kubelet E0213 04:17:38.664278 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:17:39 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:17:39.949863 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:17:39 localhost.localdomain microshift[132400]: 
kube-apiserver E0213 04:17:39.949887 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:17:43 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:17:43.073124 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:17:43 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:17:43.073145 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:17:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:43.286719 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:48.287196 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:17:48 localhost.localdomain microshift[132400]: kubelet I0213 04:17:48.664593 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:17:48 localhost.localdomain microshift[132400]: kubelet E0213 04:17:48.664900 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:17:48 localhost.localdomain microshift[132400]: kubelet I0213 04:17:48.665449 
132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:17:48 localhost.localdomain microshift[132400]: kubelet E0213 04:17:48.665706 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:17:49 localhost.localdomain microshift[132400]: kubelet I0213 04:17:49.663862 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:17:49 localhost.localdomain microshift[132400]: kubelet E0213 04:17:49.664116 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:17:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:53.286607 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:17:53 localhost.localdomain microshift[132400]: kubelet I0213 04:17:53.663791 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:17:53 localhost.localdomain microshift[132400]: kubelet E0213 04:17:53.664470 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:17:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:17:58.287418 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:01 localhost.localdomain microshift[132400]: kubelet I0213 04:18:01.663309 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:18:01 localhost.localdomain microshift[132400]: kubelet E0213 04:18:01.664027 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:18:02 localhost.localdomain microshift[132400]: kubelet I0213 04:18:02.664039 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:18:02 localhost.localdomain microshift[132400]: kubelet E0213 04:18:02.664226 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:18:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:03.286987 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:04 localhost.localdomain microshift[132400]: kubelet I0213 04:18:04.665175 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:18:04 localhost.localdomain microshift[132400]: kubelet E0213 04:18:04.665435 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:18:04 localhost.localdomain microshift[132400]: kubelet I0213 04:18:04.666071 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:18:04 localhost.localdomain microshift[132400]: kubelet E0213 04:18:04.666480 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:18:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:08.286232 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:09 localhost.localdomain microshift[132400]: kubelet I0213 04:18:09.116005 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:18:09 localhost.localdomain microshift[132400]: kubelet E0213 04:18:09.116157 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:20:11.116144614 -0500 EST m=+898.296490892 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:18:09 localhost.localdomain microshift[132400]: kubelet E0213 04:18:09.964265 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:18:09 localhost.localdomain microshift[132400]: kubelet E0213 04:18:09.964300 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:18:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:13.286968 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet I0213 04:18:16.664046 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet E0213 04:18:16.664319 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet I0213 04:18:16.664542 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet E0213 04:18:16.664734 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet I0213 04:18:16.664912 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:18:16 localhost.localdomain microshift[132400]: kubelet E0213 04:18:16.665075 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:18:17 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:18:17.281524 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:18:17 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:18:17.281741 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:18:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:18.286821 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:19 localhost.localdomain microshift[132400]: kubelet I0213 04:18:19.663786 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:18:19 localhost.localdomain microshift[132400]: kubelet E0213 04:18:19.664123 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:18:20 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:18:20.423203 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:18:20 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:18:20.423355 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:18:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:23.286770 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:28.286693 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:28 localhost.localdomain microshift[132400]: kubelet I0213 04:18:28.664560 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:18:28 localhost.localdomain microshift[132400]: kubelet E0213 04:18:28.665071 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:18:30 localhost.localdomain microshift[132400]: kubelet I0213 04:18:30.663963 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:18:30 localhost.localdomain microshift[132400]: kubelet E0213 04:18:30.664863 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:18:31 localhost.localdomain microshift[132400]: kubelet I0213 04:18:31.664316 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:18:31 localhost.localdomain microshift[132400]: kubelet E0213 04:18:31.665047 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:18:32 localhost.localdomain microshift[132400]: kubelet I0213 04:18:32.664207 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:18:32 localhost.localdomain microshift[132400]: kubelet E0213 04:18:32.664644 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:18:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:33.286445 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:38.286567 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:42 localhost.localdomain microshift[132400]: kubelet I0213 04:18:42.664010 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:18:42 localhost.localdomain microshift[132400]: kubelet E0213 04:18:42.664371 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:18:42 localhost.localdomain microshift[132400]: kubelet I0213 04:18:42.665025 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:18:42 localhost.localdomain microshift[132400]: kubelet E0213 04:18:42.665256 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:18:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:43.287215 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:44 localhost.localdomain microshift[132400]: kubelet I0213 04:18:44.664522 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:18:44 localhost.localdomain microshift[132400]: kubelet E0213 04:18:44.665069 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:18:45 localhost.localdomain microshift[132400]: kubelet I0213 04:18:45.674133 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:18:45 localhost.localdomain microshift[132400]: kubelet E0213 04:18:45.675402 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:18:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:48.286205 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:53.286341 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:18:55 localhost.localdomain microshift[132400]: kubelet I0213 04:18:55.671555 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:18:55 localhost.localdomain microshift[132400]: kubelet E0213 04:18:55.675435 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:18:56 localhost.localdomain microshift[132400]: kubelet I0213 04:18:56.667362 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:18:56 localhost.localdomain microshift[132400]: kubelet E0213 04:18:56.667941 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:18:57 localhost.localdomain microshift[132400]: kubelet I0213 04:18:57.663536 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:18:57 localhost.localdomain microshift[132400]: kubelet E0213 04:18:57.663760 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:18:57 localhost.localdomain microshift[132400]: kubelet I0213 04:18:57.664095 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:18:57 localhost.localdomain microshift[132400]: kubelet E0213 04:18:57.664343 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:18:58 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:18:58.204128 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:18:58 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:18:58.204156 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:18:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:18:58.286432 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:03.286574 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:08.286620 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:08 localhost.localdomain microshift[132400]: kubelet I0213 04:19:08.663922 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:19:08 localhost.localdomain microshift[132400]: kubelet E0213 04:19:08.664275 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:19:09 localhost.localdomain microshift[132400]: kubelet I0213 04:19:09.664029 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:19:09 localhost.localdomain microshift[132400]: kubelet E0213 04:19:09.664318 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:19:10 localhost.localdomain microshift[132400]: kubelet I0213 04:19:10.663639 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:19:10 localhost.localdomain microshift[132400]: kubelet E0213 04:19:10.664231 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:19:11 localhost.localdomain microshift[132400]: kubelet I0213 04:19:11.664215 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:19:11 localhost.localdomain microshift[132400]: kubelet E0213 04:19:11.664472 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:19:12 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:19:12.294382 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:19:12 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:19:12.294586 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:19:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:13.286818 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:18.287249 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:21 localhost.localdomain microshift[132400]: kubelet I0213 04:19:21.663941 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:19:21 localhost.localdomain microshift[132400]: kubelet E0213 04:19:21.664408 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:19:22 localhost.localdomain microshift[132400]: kubelet I0213 04:19:22.663975 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:19:22 localhost.localdomain microshift[132400]: kubelet E0213 04:19:22.664505 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:19:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:23.286962 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:23 localhost.localdomain microshift[132400]: kubelet I0213 04:19:23.663982 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:19:23 localhost.localdomain microshift[132400]: kubelet E0213 04:19:23.664423 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:19:24 localhost.localdomain microshift[132400]: kubelet I0213 04:19:24.664150 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:19:24 localhost.localdomain microshift[132400]: kubelet E0213 04:19:24.664400 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:19:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:28.286305 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:33.287240 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:34 localhost.localdomain microshift[132400]: kubelet I0213 04:19:34.663684 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d"
Feb 13 04:19:34 localhost.localdomain microshift[132400]: kubelet I0213 04:19:34.664139 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:19:34 localhost.localdomain microshift[132400]: kubelet E0213 04:19:34.664380 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:19:35 localhost.localdomain microshift[132400]: kubelet I0213 04:19:35.292302 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83}
Feb 13 04:19:35 localhost.localdomain microshift[132400]: kubelet I0213 04:19:35.292825 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:19:36 localhost.localdomain microshift[132400]: kubelet I0213 04:19:36.663490 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:19:36 localhost.localdomain microshift[132400]: kubelet E0213 04:19:36.664061 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:19:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:38.286235 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:38 localhost.localdomain microshift[132400]: kubelet I0213 04:19:38.664089 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:19:38 localhost.localdomain microshift[132400]: kubelet E0213 04:19:38.664652 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:19:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:43.286494 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:44 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:19:44.517893 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:19:44 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:19:44.518183 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:19:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:48.286237 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:48 localhost.localdomain microshift[132400]: kubelet I0213 04:19:48.346938 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:19:48 localhost.localdomain microshift[132400]: kubelet I0213 04:19:48.346982 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:19:48 localhost.localdomain microshift[132400]: kubelet I0213 04:19:48.664098 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:19:48 localhost.localdomain microshift[132400]: kubelet E0213 04:19:48.664992 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:19:49 localhost.localdomain microshift[132400]: kubelet I0213 04:19:49.664019 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:19:49 localhost.localdomain microshift[132400]: kubelet E0213 04:19:49.664172 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:19:51 localhost.localdomain microshift[132400]: kubelet I0213 04:19:51.347120 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:19:51 localhost.localdomain microshift[132400]: kubelet I0213 04:19:51.347160 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:19:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:53.286296 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:53 localhost.localdomain microshift[132400]: kubelet I0213 04:19:53.663579 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:19:53 localhost.localdomain microshift[132400]: kubelet E0213 04:19:53.664074 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:19:54 localhost.localdomain microshift[132400]: kubelet I0213 04:19:54.347357 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:19:54 localhost.localdomain microshift[132400]: kubelet I0213 04:19:54.347741 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:19:57 localhost.localdomain microshift[132400]: kubelet I0213 04:19:57.348056 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:19:57 localhost.localdomain microshift[132400]: kubelet I0213 04:19:57.348544 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:19:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:19:58.286201 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:19:59 localhost.localdomain microshift[132400]: kubelet I0213 04:19:59.663751 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:19:59 localhost.localdomain microshift[132400]: kubelet E0213 04:19:59.664926 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:20:00 localhost.localdomain microshift[132400]: kubelet I0213 04:20:00.349518 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:20:00 localhost.localdomain microshift[132400]: kubelet I0213 04:20:00.349681 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:20:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:20:00.595367 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:20:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:20:00.595389 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:20:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:03.287091 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:20:03 localhost.localdomain microshift[132400]: kubelet I0213 04:20:03.350595 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:20:03 localhost.localdomain microshift[132400]: kubelet I0213 04:20:03.350793 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:20:04 localhost.localdomain microshift[132400]: kubelet I0213 04:20:04.663964 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57"
Feb 13 04:20:04 localhost.localdomain microshift[132400]: kubelet E0213 04:20:04.664665 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:20:06 localhost.localdomain microshift[132400]: kubelet I0213 04:20:06.351324 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:06 localhost.localdomain microshift[132400]: kubelet I0213 04:20:06.351365 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:08.286493 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:08 localhost.localdomain microshift[132400]: kubelet I0213 04:20:08.664110 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:20:08 localhost.localdomain microshift[132400]: kubelet E0213 04:20:08.664627 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:20:09 localhost.localdomain microshift[132400]: kubelet I0213 04:20:09.352283 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe 
status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:09 localhost.localdomain microshift[132400]: kubelet I0213 04:20:09.352665 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:10 localhost.localdomain microshift[132400]: kubelet I0213 04:20:10.664284 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:20:10 localhost.localdomain microshift[132400]: kubelet E0213 04:20:10.665061 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:20:11 localhost.localdomain microshift[132400]: kubelet I0213 04:20:11.201930 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:20:11 localhost.localdomain microshift[132400]: kubelet E0213 04:20:11.202082 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. 
No retries permitted until 2023-02-13 04:22:13.20207118 -0500 EST m=+1020.382417458 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:20:12 localhost.localdomain microshift[132400]: kubelet I0213 04:20:12.353272 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:12 localhost.localdomain microshift[132400]: kubelet I0213 04:20:12.353304 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:13 localhost.localdomain microshift[132400]: kubelet E0213 04:20:13.170023 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:20:13 localhost.localdomain microshift[132400]: kubelet E0213 04:20:13.170714 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:20:13 localhost.localdomain 
microshift[132400]: sysconfwatch-controller I0213 04:20:13.287239 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:15 localhost.localdomain microshift[132400]: kubelet I0213 04:20:15.354221 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:15 localhost.localdomain microshift[132400]: kubelet I0213 04:20:15.354256 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:15 localhost.localdomain microshift[132400]: kubelet I0213 04:20:15.665122 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:20:15 localhost.localdomain microshift[132400]: kubelet E0213 04:20:15.665272 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:20:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:18.286654 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:18 localhost.localdomain microshift[132400]: kubelet I0213 04:20:18.354698 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:18 localhost.localdomain microshift[132400]: kubelet I0213 04:20:18.354741 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:20 localhost.localdomain microshift[132400]: kubelet I0213 04:20:20.664149 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:20:20 localhost.localdomain microshift[132400]: kubelet E0213 04:20:20.665224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:20:21 localhost.localdomain microshift[132400]: kubelet I0213 04:20:21.355125 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:21 localhost.localdomain microshift[132400]: kubelet I0213 04:20:21.355163 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:23.286766 
132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:24 localhost.localdomain microshift[132400]: kubelet I0213 04:20:24.355595 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:24 localhost.localdomain microshift[132400]: kubelet I0213 04:20:24.355935 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:24 localhost.localdomain microshift[132400]: kubelet I0213 04:20:24.664307 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:20:24 localhost.localdomain microshift[132400]: kubelet E0213 04:20:24.664640 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:20:26 localhost.localdomain microshift[132400]: kubelet I0213 04:20:26.665815 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:20:27 localhost.localdomain microshift[132400]: kubelet I0213 04:20:27.356931 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:27 localhost.localdomain microshift[132400]: kubelet I0213 04:20:27.356980 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:27 localhost.localdomain microshift[132400]: kubelet I0213 04:20:27.369727 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954} Feb 13 04:20:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:28.286779 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:30 localhost.localdomain microshift[132400]: kubelet I0213 04:20:30.357683 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:30 localhost.localdomain microshift[132400]: kubelet I0213 04:20:30.358111 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:32 localhost.localdomain microshift[132400]: kubelet I0213 04:20:32.664032 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:20:32 localhost.localdomain microshift[132400]: kubelet E0213 04:20:32.664787 
132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:20:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:33.287445 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:33 localhost.localdomain microshift[132400]: kubelet I0213 04:20:33.359277 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:33 localhost.localdomain microshift[132400]: kubelet I0213 04:20:33.359324 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:35 localhost.localdomain microshift[132400]: kubelet I0213 04:20:35.664052 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:20:35 localhost.localdomain microshift[132400]: kubelet E0213 04:20:35.664492 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:20:36 
localhost.localdomain microshift[132400]: kubelet I0213 04:20:36.359938 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:36 localhost.localdomain microshift[132400]: kubelet I0213 04:20:36.359993 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:37 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:20:37.676600 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:20:37 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:20:37.676978 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:20:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:38.286962 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:39 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:20:39.231826 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:20:39 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:20:39.231851 132400 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:20:39 localhost.localdomain microshift[132400]: kubelet I0213 04:20:39.360275 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:39 localhost.localdomain microshift[132400]: kubelet I0213 04:20:39.360347 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:42 localhost.localdomain microshift[132400]: kubelet I0213 04:20:42.360917 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:42 localhost.localdomain microshift[132400]: kubelet I0213 04:20:42.361439 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:43.287255 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:44 localhost.localdomain microshift[132400]: kubelet I0213 04:20:44.632495 
132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:44 localhost.localdomain microshift[132400]: kubelet I0213 04:20:44.632539 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:45 localhost.localdomain microshift[132400]: kubelet I0213 04:20:45.362251 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:45 localhost.localdomain microshift[132400]: kubelet I0213 04:20:45.362296 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:45 localhost.localdomain microshift[132400]: kubelet I0213 04:20:45.663834 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:20:45 localhost.localdomain microshift[132400]: kubelet E0213 04:20:45.664211 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:20:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:48.287226 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:48 localhost.localdomain microshift[132400]: kubelet I0213 04:20:48.363449 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:48 localhost.localdomain microshift[132400]: kubelet I0213 04:20:48.363707 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:49 localhost.localdomain microshift[132400]: kubelet I0213 04:20:49.663294 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:20:49 localhost.localdomain microshift[132400]: kubelet E0213 04:20:49.663668 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:20:51 localhost.localdomain microshift[132400]: kubelet I0213 04:20:51.363997 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 13 04:20:51 localhost.localdomain microshift[132400]: kubelet I0213 04:20:51.364040 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:53.286500 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:20:54 localhost.localdomain microshift[132400]: kubelet I0213 04:20:54.364891 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:54 localhost.localdomain microshift[132400]: kubelet I0213 04:20:54.364946 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:54 localhost.localdomain microshift[132400]: kubelet I0213 04:20:54.631118 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:54 localhost.localdomain microshift[132400]: kubelet I0213 04:20:54.631173 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get 
\"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:56 localhost.localdomain microshift[132400]: kubelet I0213 04:20:56.670014 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:20:56 localhost.localdomain microshift[132400]: kubelet E0213 04:20:56.671038 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:20:57 localhost.localdomain microshift[132400]: kubelet I0213 04:20:57.365317 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:20:57 localhost.localdomain microshift[132400]: kubelet I0213 04:20:57.365365 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:20:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:20:58.287049 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:00 localhost.localdomain microshift[132400]: kubelet I0213 04:21:00.366115 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 13 04:21:00 localhost.localdomain microshift[132400]: kubelet I0213 04:21:00.366171 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:01 localhost.localdomain microshift[132400]: kubelet I0213 04:21:01.418888 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" exitCode=255 Feb 13 04:21:01 localhost.localdomain microshift[132400]: kubelet I0213 04:21:01.418912 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954} Feb 13 04:21:01 localhost.localdomain microshift[132400]: kubelet I0213 04:21:01.418931 132400 scope.go:115] "RemoveContainer" containerID="fa09a904e63de31d525a4104abbdb6e0e8aa3692dcc982d1581f3918a9798b57" Feb 13 04:21:01 localhost.localdomain microshift[132400]: kubelet I0213 04:21:01.419125 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:21:01 localhost.localdomain microshift[132400]: kubelet E0213 04:21:01.419255 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:21:02 
localhost.localdomain microshift[132400]: kubelet I0213 04:21:02.663378 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:21:02 localhost.localdomain microshift[132400]: kubelet E0213 04:21:02.663779 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:21:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:03.286936 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:03 localhost.localdomain microshift[132400]: kubelet I0213 04:21:03.366469 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:03 localhost.localdomain microshift[132400]: kubelet I0213 04:21:03.366704 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:04 localhost.localdomain microshift[132400]: kubelet I0213 04:21:04.632339 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:04 localhost.localdomain 
microshift[132400]: kubelet I0213 04:21:04.632394 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:06 localhost.localdomain microshift[132400]: kubelet I0213 04:21:06.367199 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:06 localhost.localdomain microshift[132400]: kubelet I0213 04:21:06.367261 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:08.287805 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:08 localhost.localdomain microshift[132400]: kubelet I0213 04:21:08.664634 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:21:08 localhost.localdomain microshift[132400]: kubelet E0213 04:21:08.665132 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:21:09 localhost.localdomain microshift[132400]: kubelet I0213 
04:21:09.367641 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:09 localhost.localdomain microshift[132400]: kubelet I0213 04:21:09.367711 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:12 localhost.localdomain microshift[132400]: kubelet I0213 04:21:12.368597 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:12 localhost.localdomain microshift[132400]: kubelet I0213 04:21:12.369269 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:13.287186 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:13 localhost.localdomain microshift[132400]: kubelet I0213 04:21:13.664128 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:21:13 localhost.localdomain microshift[132400]: kubelet E0213 04:21:13.664302 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:21:14 localhost.localdomain microshift[132400]: kubelet I0213 04:21:14.631846 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:14 localhost.localdomain microshift[132400]: kubelet I0213 04:21:14.632007 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:21:14.894147 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:21:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:21:14.894478 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:21:15 localhost.localdomain microshift[132400]: kubelet I0213 04:21:15.370025 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:15 localhost.localdomain microshift[132400]: kubelet I0213 04:21:15.370262 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:15 localhost.localdomain microshift[132400]: kubelet I0213 04:21:15.669391 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:21:15 localhost.localdomain microshift[132400]: kubelet E0213 04:21:15.669844 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:21:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:18.286909 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:18 localhost.localdomain microshift[132400]: kubelet I0213 04:21:18.370873 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:18 localhost.localdomain microshift[132400]: kubelet I0213 04:21:18.371037 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:21 localhost.localdomain microshift[132400]: kubelet I0213 04:21:21.372108 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:21 localhost.localdomain microshift[132400]: kubelet I0213 04:21:21.372142 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:22 localhost.localdomain microshift[132400]: kubelet I0213 04:21:22.665122 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:21:22 localhost.localdomain microshift[132400]: kubelet E0213 04:21:22.665867 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:21:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:23.286410 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.372499 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.372852 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.632270 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.632532 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.632593 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.633003 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:21:24 localhost.localdomain microshift[132400]: kubelet I0213 04:21:24.633186 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83" gracePeriod=30 Feb 13 04:21:25 localhost.localdomain microshift[132400]: kubelet I0213 04:21:25.669549 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:21:25 localhost.localdomain microshift[132400]: kubelet E0213 04:21:25.669898 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:21:27 localhost.localdomain microshift[132400]: kubelet I0213 04:21:27.373954 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:27 localhost.localdomain microshift[132400]: kubelet I0213 04:21:27.374029 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:28.287003 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:29 localhost.localdomain microshift[132400]: kubelet I0213 04:21:29.664033 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:21:29 
localhost.localdomain microshift[132400]: kubelet E0213 04:21:29.664339 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:21:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:21:30.319206 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:21:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:21:30.319413 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:21:30 localhost.localdomain microshift[132400]: kubelet I0213 04:21:30.375033 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:30 localhost.localdomain microshift[132400]: kubelet I0213 04:21:30.375089 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:33.286487 132400 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 04:21:33 localhost.localdomain microshift[132400]: kubelet I0213 04:21:33.375911 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:33 localhost.localdomain microshift[132400]: kubelet I0213 04:21:33.376190 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:34 localhost.localdomain microshift[132400]: kubelet I0213 04:21:34.665728 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:21:34 localhost.localdomain microshift[132400]: kubelet E0213 04:21:34.667981 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:21:36 localhost.localdomain microshift[132400]: kubelet I0213 04:21:36.376841 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:36 localhost.localdomain microshift[132400]: kubelet I0213 04:21:36.376890 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:36 localhost.localdomain microshift[132400]: kubelet I0213 04:21:36.664962 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:21:36 localhost.localdomain microshift[132400]: kubelet E0213 04:21:36.665106 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:21:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:38.286686 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:39 localhost.localdomain microshift[132400]: kubelet I0213 04:21:39.377882 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:39 localhost.localdomain microshift[132400]: kubelet I0213 04:21:39.378256 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:40 localhost.localdomain microshift[132400]: kubelet I0213 04:21:40.663541 132400 scope.go:115] "RemoveContainer" 
containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:21:40 localhost.localdomain microshift[132400]: kubelet E0213 04:21:40.663897 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:21:42 localhost.localdomain microshift[132400]: kubelet I0213 04:21:42.378869 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:42 localhost.localdomain microshift[132400]: kubelet I0213 04:21:42.378920 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:43.286800 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.379830 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.379879 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.485248 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83" exitCode=0 Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.485278 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83} Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.485292 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9} Feb 13 04:21:45 localhost.localdomain microshift[132400]: kubelet I0213 04:21:45.485305 132400 scope.go:115] "RemoveContainer" containerID="75c5b29bd84afe135720eb1f7645213c7754fcf066cd4a10642fd4b7c5a3cf6d" Feb 13 04:21:46 localhost.localdomain microshift[132400]: kubelet I0213 04:21:46.487153 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 04:21:46 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:21:46.785221 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:21:46 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:21:46.785414 132400 
reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:21:47 localhost.localdomain microshift[132400]: kubelet I0213 04:21:47.664101 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c" Feb 13 04:21:47 localhost.localdomain microshift[132400]: kubelet E0213 04:21:47.664978 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:21:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:48.286402 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:48 localhost.localdomain microshift[132400]: kubelet I0213 04:21:48.380783 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:48 localhost.localdomain microshift[132400]: kubelet I0213 04:21:48.380832 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:21:48 localhost.localdomain microshift[132400]: kubelet I0213 04:21:48.380869 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-dns/dns-default-z4v2p" Feb 13 04:21:50 localhost.localdomain microshift[132400]: kubelet I0213 04:21:50.663463 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:21:50 localhost.localdomain microshift[132400]: kubelet E0213 04:21:50.664054 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:21:52 localhost.localdomain microshift[132400]: kubelet I0213 04:21:52.664026 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d" Feb 13 04:21:52 localhost.localdomain microshift[132400]: kubelet E0213 04:21:52.664862 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:21:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:53.287130 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:21:57 localhost.localdomain microshift[132400]: kubelet I0213 04:21:57.346049 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:21:57 localhost.localdomain 
microshift[132400]: kubelet I0213 04:21:57.346081 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:21:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:21:58.286623 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:00 localhost.localdomain microshift[132400]: kubelet I0213 04:22:00.346379 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:00 localhost.localdomain microshift[132400]: kubelet I0213 04:22:00.346422 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:01 localhost.localdomain microshift[132400]: kubelet I0213 04:22:01.664401 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:22:01 localhost.localdomain microshift[132400]: kubelet E0213 04:22:01.665021 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:02 localhost.localdomain microshift[132400]: kubelet I0213 04:22:02.663879 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:22:02 localhost.localdomain microshift[132400]: kubelet E0213 04:22:02.664168 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:22:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:03.286971 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:03 localhost.localdomain microshift[132400]: kubelet I0213 04:22:03.347069 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:03 localhost.localdomain microshift[132400]: kubelet I0213 04:22:03.347145 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:06 localhost.localdomain microshift[132400]: kubelet I0213 04:22:06.347266 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:06 localhost.localdomain microshift[132400]: kubelet I0213 04:22:06.347310 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:06 localhost.localdomain microshift[132400]: kubelet I0213 04:22:06.665215 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:22:06 localhost.localdomain microshift[132400]: kubelet E0213 04:22:06.665493 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:22:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:08.286550 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:09 localhost.localdomain microshift[132400]: kubelet I0213 04:22:09.347686 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:09 localhost.localdomain microshift[132400]: kubelet I0213 04:22:09.347736 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:12 localhost.localdomain microshift[132400]: kubelet I0213 04:22:12.348784 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:12 localhost.localdomain microshift[132400]: kubelet I0213 04:22:12.348830 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:13 localhost.localdomain microshift[132400]: kubelet I0213 04:22:13.229955 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:22:13 localhost.localdomain microshift[132400]: kubelet E0213 04:22:13.230179 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:24:15.230168759 -0500 EST m=+1142.410515041 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:22:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:13.286237 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:13 localhost.localdomain microshift[132400]: kubelet I0213 04:22:13.664181 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:22:13 localhost.localdomain microshift[132400]: kubelet E0213 04:22:13.664734 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:22:15 localhost.localdomain microshift[132400]: kubelet I0213 04:22:15.349656 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:15 localhost.localdomain microshift[132400]: kubelet I0213 04:22:15.350035 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:16 localhost.localdomain microshift[132400]: kubelet E0213 04:22:16.348417 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:22:16 localhost.localdomain microshift[132400]: kubelet E0213 04:22:16.348437 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:22:16 localhost.localdomain microshift[132400]: kubelet I0213 04:22:16.664962 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:22:17 localhost.localdomain microshift[132400]: kubelet I0213 04:22:17.535229 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8}
Feb 13 04:22:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:18.286311 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:18 localhost.localdomain microshift[132400]: kubelet I0213 04:22:18.350823 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:18 localhost.localdomain microshift[132400]: kubelet I0213 04:22:18.350862 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet I0213 04:22:20.541562 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" exitCode=1
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet I0213 04:22:20.541953 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8}
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet I0213 04:22:20.542052 132400 scope.go:115] "RemoveContainer" containerID="04ed43e00e3e9eee47eaa2df3d89e5489ba8be53f86fa7bf965223f1dffad98c"
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet I0213 04:22:20.542623 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet E0213 04:22:20.543046 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:20 localhost.localdomain microshift[132400]: kubelet I0213 04:22:20.901904 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet I0213 04:22:21.351158 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet I0213 04:22:21.351204 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet I0213 04:22:21.546114 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet E0213 04:22:21.547046 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet I0213 04:22:21.664171 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:22:21 localhost.localdomain microshift[132400]: kubelet E0213 04:22:21.664722 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:22:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:23.286346 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:24 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:22:24.148592 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:22:24 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:22:24.148847 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:22:24 localhost.localdomain microshift[132400]: kubelet I0213 04:22:24.352054 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:24 localhost.localdomain microshift[132400]: kubelet I0213 04:22:24.352102 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:26 localhost.localdomain microshift[132400]: kubelet I0213 04:22:26.663315 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:22:26 localhost.localdomain microshift[132400]: kubelet E0213 04:22:26.663965 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:22:27 localhost.localdomain microshift[132400]: kubelet I0213 04:22:27.352922 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:27 localhost.localdomain microshift[132400]: kubelet I0213 04:22:27.352961 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:28.287047 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:30 localhost.localdomain microshift[132400]: kubelet I0213 04:22:30.353811 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:30 localhost.localdomain microshift[132400]: kubelet I0213 04:22:30.353846 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:33.286926 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kubelet I0213 04:22:33.354392 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kubelet I0213 04:22:33.354643 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:22:33.585314 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:22:33.585760 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kubelet I0213 04:22:33.663997 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:22:33 localhost.localdomain microshift[132400]: kubelet E0213 04:22:33.664406 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:34 localhost.localdomain microshift[132400]: kubelet I0213 04:22:34.663772 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:22:35 localhost.localdomain microshift[132400]: kubelet I0213 04:22:35.570633 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703}
Feb 13 04:22:35 localhost.localdomain microshift[132400]: kubelet I0213 04:22:35.571519 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:22:36 localhost.localdomain microshift[132400]: kubelet I0213 04:22:36.355584 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:36 localhost.localdomain microshift[132400]: kubelet I0213 04:22:36.355663 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:36 localhost.localdomain microshift[132400]: kubelet I0213 04:22:36.571534 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:36 localhost.localdomain microshift[132400]: kubelet I0213 04:22:36.572212 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:37 localhost.localdomain microshift[132400]: kubelet I0213 04:22:37.573674 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:37 localhost.localdomain microshift[132400]: kubelet I0213 04:22:37.574084 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:38.286231 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.575068 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.575357 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.577064 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" exitCode=1
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.577102 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703}
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.577126 132400 scope.go:115] "RemoveContainer" containerID="1c324fe60ab102af281b8b600f89240cec9191b06935ee665d309a67af26116d"
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet I0213 04:22:38.577392 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:22:38 localhost.localdomain microshift[132400]: kubelet E0213 04:22:38.577711 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:22:39 localhost.localdomain microshift[132400]: kubelet I0213 04:22:39.356338 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:39 localhost.localdomain microshift[132400]: kubelet I0213 04:22:39.356393 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:40 localhost.localdomain microshift[132400]: kubelet I0213 04:22:40.664698 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:22:40 localhost.localdomain microshift[132400]: kubelet E0213 04:22:40.664851 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:22:42 localhost.localdomain microshift[132400]: kubelet I0213 04:22:42.357473 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:42 localhost.localdomain microshift[132400]: kubelet I0213 04:22:42.357949 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:43.286610 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:45 localhost.localdomain microshift[132400]: kubelet I0213 04:22:45.358880 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:45 localhost.localdomain microshift[132400]: kubelet I0213 04:22:45.359260 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:46 localhost.localdomain microshift[132400]: kubelet I0213 04:22:46.665127 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:22:46 localhost.localdomain microshift[132400]: kubelet E0213 04:22:46.665390 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:48.286833 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:48 localhost.localdomain microshift[132400]: kubelet I0213 04:22:48.359435 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:48 localhost.localdomain microshift[132400]: kubelet I0213 04:22:48.359497 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:51 localhost.localdomain microshift[132400]: kubelet I0213 04:22:51.359903 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:51 localhost.localdomain microshift[132400]: kubelet I0213 04:22:51.359968 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:53.286865 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:22:53 localhost.localdomain microshift[132400]: kubelet I0213 04:22:53.663966 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:22:53 localhost.localdomain microshift[132400]: kubelet E0213 04:22:53.664428 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet I0213 04:22:54.360806 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet I0213 04:22:54.360848 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet I0213 04:22:54.632051 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet I0213 04:22:54.632291 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet I0213 04:22:54.663925 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:22:54 localhost.localdomain microshift[132400]: kubelet E0213 04:22:54.664406 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:22:55 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:22:55.109649 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:22:55 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:22:55.109674 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:22:57 localhost.localdomain microshift[132400]: kubelet I0213 04:22:57.361716 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:22:57 localhost.localdomain microshift[132400]: kubelet I0213 04:22:57.361755 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:22:57 localhost.localdomain microshift[132400]: kubelet I0213 04:22:57.663758 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:22:57 localhost.localdomain microshift[132400]: kubelet E0213 04:22:57.664124 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:22:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:22:58.286707 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:23:00 localhost.localdomain microshift[132400]: kubelet I0213 04:23:00.362116 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:23:00 localhost.localdomain microshift[132400]: kubelet I0213 04:23:00.362685 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:23:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:03.287374 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:23:03 localhost.localdomain microshift[132400]: kubelet I0213 04:23:03.363013 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:23:03 localhost.localdomain microshift[132400]: kubelet I0213 04:23:03.363175 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:23:03 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:23:03.746140 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:23:03 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:23:03.746423 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:23:04 localhost.localdomain microshift[132400]: kubelet I0213 04:23:04.632475 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:23:04 localhost.localdomain microshift[132400]: kubelet I0213 04:23:04.632864 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:23:06 localhost.localdomain microshift[132400]: kubelet I0213 04:23:06.363972 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns
namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:06 localhost.localdomain microshift[132400]: kubelet I0213 04:23:06.364011 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:06 localhost.localdomain microshift[132400]: kubelet I0213 04:23:06.664289 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:23:06 localhost.localdomain microshift[132400]: kubelet E0213 04:23:06.664583 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:23:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:08.287032 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:08 localhost.localdomain microshift[132400]: kubelet I0213 04:23:08.663779 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:23:08 localhost.localdomain microshift[132400]: kubelet E0213 04:23:08.664362 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node 
pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:23:08 localhost.localdomain microshift[132400]: kubelet I0213 04:23:08.664479 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:23:08 localhost.localdomain microshift[132400]: kubelet E0213 04:23:08.664758 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:23:09 localhost.localdomain microshift[132400]: kubelet I0213 04:23:09.364412 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:09 localhost.localdomain microshift[132400]: kubelet I0213 04:23:09.364789 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:12 localhost.localdomain microshift[132400]: kubelet I0213 04:23:12.365508 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:12 
localhost.localdomain microshift[132400]: kubelet I0213 04:23:12.365557 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:13.286324 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:14 localhost.localdomain microshift[132400]: kubelet I0213 04:23:14.631584 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:14 localhost.localdomain microshift[132400]: kubelet I0213 04:23:14.631891 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:15 localhost.localdomain microshift[132400]: kubelet I0213 04:23:15.366404 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:15 localhost.localdomain microshift[132400]: kubelet I0213 04:23:15.366637 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 13 04:23:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:18.286827 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:18 localhost.localdomain microshift[132400]: kubelet I0213 04:23:18.367170 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:18 localhost.localdomain microshift[132400]: kubelet I0213 04:23:18.367437 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:20 localhost.localdomain microshift[132400]: kubelet I0213 04:23:20.664742 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:23:20 localhost.localdomain microshift[132400]: kubelet E0213 04:23:20.665749 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:23:21 localhost.localdomain microshift[132400]: kubelet I0213 04:23:21.368274 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:21 localhost.localdomain microshift[132400]: 
kubelet I0213 04:23:21.368325 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:21 localhost.localdomain microshift[132400]: kubelet I0213 04:23:21.663694 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:23:21 localhost.localdomain microshift[132400]: kubelet E0213 04:23:21.664224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:23:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:23.287151 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:23 localhost.localdomain microshift[132400]: kubelet I0213 04:23:23.663283 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:23:23 localhost.localdomain microshift[132400]: kubelet E0213 04:23:23.663471 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:23:24 localhost.localdomain microshift[132400]: kubelet I0213 04:23:24.369377 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:24 localhost.localdomain microshift[132400]: kubelet I0213 04:23:24.369424 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:24 localhost.localdomain microshift[132400]: kubelet I0213 04:23:24.631724 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:24 localhost.localdomain microshift[132400]: kubelet I0213 04:23:24.631995 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:26 localhost.localdomain microshift[132400]: kubelet I0213 04:23:26.192467 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:23:26 localhost.localdomain microshift[132400]: kubelet I0213 04:23:26.192811 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:23:26 localhost.localdomain microshift[132400]: kubelet E0213 04:23:26.193114 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:23:27 localhost.localdomain microshift[132400]: kubelet I0213 04:23:27.370322 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:27 localhost.localdomain microshift[132400]: kubelet I0213 04:23:27.370354 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:28.286621 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:30 localhost.localdomain microshift[132400]: kubelet I0213 04:23:30.371137 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:30 localhost.localdomain microshift[132400]: kubelet I0213 04:23:30.371525 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 
04:23:32 localhost.localdomain microshift[132400]: kubelet I0213 04:23:32.664122 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:23:32 localhost.localdomain microshift[132400]: kubelet E0213 04:23:32.664399 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:23:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:33.287021 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:33 localhost.localdomain microshift[132400]: kubelet I0213 04:23:33.372193 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:33 localhost.localdomain microshift[132400]: kubelet I0213 04:23:33.372243 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:34 localhost.localdomain microshift[132400]: kubelet I0213 04:23:34.632376 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:34 localhost.localdomain microshift[132400]: kubelet I0213 04:23:34.632770 
132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:34 localhost.localdomain microshift[132400]: kubelet I0213 04:23:34.632831 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:23:34 localhost.localdomain microshift[132400]: kubelet I0213 04:23:34.633232 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:23:34 localhost.localdomain microshift[132400]: kubelet I0213 04:23:34.633387 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" gracePeriod=30 Feb 13 04:23:36 localhost.localdomain microshift[132400]: kubelet I0213 04:23:36.373294 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:36 localhost.localdomain microshift[132400]: kubelet I0213 04:23:36.373332 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:23:37 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:23:37.511560 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:23:37 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:23:37.511928 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:23:37 localhost.localdomain microshift[132400]: kubelet I0213 04:23:37.663232 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:23:37 localhost.localdomain microshift[132400]: kubelet E0213 04:23:37.663550 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:23:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:38.287045 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:38 localhost.localdomain microshift[132400]: kubelet I0213 04:23:38.664270 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:23:38 localhost.localdomain microshift[132400]: kubelet E0213 04:23:38.664593 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:23:39 localhost.localdomain microshift[132400]: kubelet I0213 04:23:39.373455 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:39 localhost.localdomain microshift[132400]: kubelet I0213 04:23:39.373529 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:40 localhost.localdomain microshift[132400]: kubelet I0213 04:23:40.461739 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/9.log" Feb 13 04:23:40 localhost.localdomain microshift[132400]: kubelet I0213 04:23:40.464293 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log" Feb 13 04:23:42 localhost.localdomain microshift[132400]: kubelet I0213 04:23:42.373800 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:42 localhost.localdomain microshift[132400]: kubelet I0213 04:23:42.373859 132400 prober.go:109] "Probe failed" 
probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:43.286438 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:45 localhost.localdomain microshift[132400]: kubelet I0213 04:23:45.374498 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:45 localhost.localdomain microshift[132400]: kubelet I0213 04:23:45.374547 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:47 localhost.localdomain microshift[132400]: kubelet I0213 04:23:47.664058 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:23:47 localhost.localdomain microshift[132400]: kubelet E0213 04:23:47.664481 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:23:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:48.286881 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 
04:23:48 localhost.localdomain microshift[132400]: kubelet I0213 04:23:48.375197 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:48 localhost.localdomain microshift[132400]: kubelet I0213 04:23:48.375261 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:50 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:23:50.906995 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:23:50 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:23:50.907018 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:23:51 localhost.localdomain microshift[132400]: kubelet I0213 04:23:51.376124 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:23:51 localhost.localdomain microshift[132400]: kubelet I0213 04:23:51.376169 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:52 localhost.localdomain microshift[132400]: kubelet I0213 04:23:52.664933 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:23:52 localhost.localdomain microshift[132400]: kubelet E0213 04:23:52.665119 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:23:52 localhost.localdomain microshift[132400]: kubelet I0213 04:23:52.665454 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:23:52 localhost.localdomain microshift[132400]: kubelet E0213 04:23:52.666085 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:23:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:53.286887 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:23:54 localhost.localdomain microshift[132400]: kubelet I0213 04:23:54.376351 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 13 04:23:54 localhost.localdomain microshift[132400]: kubelet I0213 04:23:54.376397 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:23:54 localhost.localdomain microshift[132400]: kubelet E0213 04:23:54.738224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:23:55 localhost.localdomain microshift[132400]: kubelet I0213 04:23:55.693164 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" exitCode=0 Feb 13 04:23:55 localhost.localdomain microshift[132400]: kubelet I0213 04:23:55.693191 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9} Feb 13 04:23:55 localhost.localdomain microshift[132400]: kubelet I0213 04:23:55.693211 132400 scope.go:115] "RemoveContainer" containerID="a58c152147c0592ed31b0aec1e3ffed38a6cb198c752d10a4553e48c1dbd4e83" Feb 13 04:23:55 localhost.localdomain microshift[132400]: kubelet I0213 04:23:55.693430 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:23:55 localhost.localdomain microshift[132400]: kubelet E0213 04:23:55.693731 132400 pod_workers.go:965] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:23:57 localhost.localdomain microshift[132400]: kubelet I0213 04:23:57.377165 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:23:57 localhost.localdomain microshift[132400]: kubelet I0213 04:23:57.377465 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:23:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:23:58.286721 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:23:58 localhost.localdomain microshift[132400]: kubelet I0213 04:23:58.664510 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:23:58 localhost.localdomain microshift[132400]: kubelet E0213 04:23:58.665228 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:03.286815 132400 net.go:46] ovn
gateway IP address: 192.168.122.17
Feb 13 04:24:05 localhost.localdomain microshift[132400]: kubelet I0213 04:24:05.663482 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:24:05 localhost.localdomain microshift[132400]: kubelet E0213 04:24:05.663825 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:24:07 localhost.localdomain microshift[132400]: kubelet I0213 04:24:07.663722 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:24:07 localhost.localdomain microshift[132400]: kubelet E0213 04:24:07.664024 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:24:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:08.286484 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:08 localhost.localdomain microshift[132400]: kubelet I0213 04:24:08.664229 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:24:08 localhost.localdomain microshift[132400]: kubelet E0213 04:24:08.664804 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:24:10 localhost.localdomain microshift[132400]: kubelet I0213 04:24:10.663808 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:24:10 localhost.localdomain microshift[132400]: kubelet E0213 04:24:10.664434 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:13.286898 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:15 localhost.localdomain microshift[132400]: kubelet I0213 04:24:15.233849 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:24:15 localhost.localdomain microshift[132400]: kubelet E0213 04:24:15.233958 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:26:17.233948255 -0500 EST m=+1264.414294523 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:24:17 localhost.localdomain microshift[132400]: kubelet I0213 04:24:17.663417 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:24:17 localhost.localdomain microshift[132400]: kubelet E0213 04:24:17.664200 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:24:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:18.286996 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:19 localhost.localdomain microshift[132400]: kubelet E0213 04:24:19.533520 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:24:19 localhost.localdomain microshift[132400]: kubelet E0213 04:24:19.533550 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:24:19 localhost.localdomain
microshift[132400]: kubelet I0213 04:24:19.664300 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:24:19 localhost.localdomain microshift[132400]: kubelet E0213 04:24:19.664667 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:24:19 localhost.localdomain microshift[132400]: kubelet I0213 04:24:19.664778 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:24:19 localhost.localdomain microshift[132400]: kubelet E0213 04:24:19.665004 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:24:21 localhost.localdomain microshift[132400]: kubelet I0213 04:24:21.664339 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:24:21 localhost.localdomain microshift[132400]: kubelet E0213 04:24:21.665163 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:21 localhost.localdomain
microshift[132400]: kube-apiserver W0213 04:24:21.800684 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:24:21 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:24:21.800706 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:24:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:23.287046 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:24 localhost.localdomain microshift[132400]: kubelet I0213 04:24:24.917760 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/9.log"
Feb 13 04:24:24 localhost.localdomain microshift[132400]: kubelet I0213 04:24:24.919964 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:24:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:28.286388 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:24:31.014221 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:24:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:24:31.014494 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to
watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:24:31 localhost.localdomain microshift[132400]: kubelet I0213 04:24:31.549564 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:24:32 localhost.localdomain microshift[132400]: kubelet I0213 04:24:32.663621 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:24:32 localhost.localdomain microshift[132400]: kubelet E0213 04:24:32.663793 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:24:32 localhost.localdomain microshift[132400]: kubelet I0213 04:24:32.664119 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:24:32 localhost.localdomain microshift[132400]: kubelet E0213 04:24:32.664374 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:24:33 localhost.localdomain microshift[132400]: kubelet I0213 04:24:33.234539 132400 logs.go:323] "Finished parsing log file"
path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:24:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:33.287187 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:33 localhost.localdomain microshift[132400]: kubelet I0213 04:24:33.664044 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:24:33 localhost.localdomain microshift[132400]: kubelet I0213 04:24:33.664672 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:24:33 localhost.localdomain microshift[132400]: kubelet E0213 04:24:33.664840 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:24:33 localhost.localdomain microshift[132400]: kubelet E0213 04:24:33.664919 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:35 localhost.localdomain microshift[132400]: kubelet I0213 04:24:35.779713 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:24:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:38.286864 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:43
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:43.286276 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:44 localhost.localdomain microshift[132400]: kubelet I0213 04:24:44.664420 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:24:44 localhost.localdomain microshift[132400]: kubelet E0213 04:24:44.664685 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:44 localhost.localdomain microshift[132400]: kubelet I0213 04:24:44.664905 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:24:44 localhost.localdomain microshift[132400]: kubelet E0213 04:24:44.665017 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:24:46 localhost.localdomain microshift[132400]: kubelet I0213 04:24:46.159480 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:24:46 localhost.localdomain microshift[132400]: kubelet I0213 04:24:46.666220 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:24:46 localhost.localdomain
microshift[132400]: kubelet I0213 04:24:46.666416 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:24:46 localhost.localdomain microshift[132400]: kubelet E0213 04:24:46.666718 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:24:46 localhost.localdomain microshift[132400]: kubelet E0213 04:24:46.667163 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:24:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:48.286710 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:53.287144 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:57 localhost.localdomain microshift[132400]: kubelet I0213 04:24:57.663545 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:24:57 localhost.localdomain microshift[132400]: kubelet E0213 04:24:57.663877 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:24:57 localhost.localdomain microshift[132400]: kubelet I0213 04:24:57.664104 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:24:57 localhost.localdomain microshift[132400]: kubelet E0213 04:24:57.664407 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:24:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:24:58.286234 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:24:58 localhost.localdomain microshift[132400]: kubelet I0213 04:24:58.664445 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:24:58 localhost.localdomain microshift[132400]: kubelet E0213 04:24:58.664736 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:24:59 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:24:59.546541 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested
resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:24:59 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:24:59.546764 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:24:59 localhost.localdomain microshift[132400]: kubelet I0213 04:24:59.663790 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:24:59 localhost.localdomain microshift[132400]: kubelet E0213 04:24:59.664167 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:25:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:03.287200 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:04 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:25:04.123633 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:25:04 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:25:04.123654 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:25:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:08.286223 132400 net.go:46] ovn gateway IP
address: 192.168.122.17
Feb 13 04:25:10 localhost.localdomain microshift[132400]: kubelet I0213 04:25:10.664428 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:25:10 localhost.localdomain microshift[132400]: kubelet E0213 04:25:10.665246 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:25:12 localhost.localdomain microshift[132400]: kubelet I0213 04:25:12.664237 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:25:12 localhost.localdomain microshift[132400]: kubelet E0213 04:25:12.664888 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:25:12 localhost.localdomain microshift[132400]: kubelet I0213 04:25:12.665358 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:25:12 localhost.localdomain microshift[132400]: kubelet E0213 04:25:12.665925 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\""
pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:25:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:13.286780 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:13 localhost.localdomain microshift[132400]: kubelet I0213 04:25:13.663503 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:25:13 localhost.localdomain microshift[132400]: kubelet E0213 04:25:13.663787 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:25:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:18.286250 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:23.286915 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:23 localhost.localdomain microshift[132400]: kubelet I0213 04:25:23.663911 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:25:23 localhost.localdomain microshift[132400]: kubelet E0213 04:25:23.664187 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:25:24 localhost.localdomain microshift[132400]: kubelet I0213 04:25:24.664342 132400 scope.go:115] "RemoveContainer"
containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:25:24 localhost.localdomain microshift[132400]: kubelet E0213 04:25:24.665304 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:25:25 localhost.localdomain microshift[132400]: kubelet I0213 04:25:25.669146 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:25:25 localhost.localdomain microshift[132400]: kubelet E0213 04:25:25.669522 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:25:27 localhost.localdomain microshift[132400]: kubelet I0213 04:25:27.663982 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:25:27 localhost.localdomain microshift[132400]: kubelet E0213 04:25:27.664549 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:25:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213
04:25:28.286505 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:33.286541 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:34 localhost.localdomain microshift[132400]: kubelet I0213 04:25:34.664329 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:25:34 localhost.localdomain microshift[132400]: kubelet E0213 04:25:34.664605 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:25:36 localhost.localdomain microshift[132400]: kubelet I0213 04:25:36.665021 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954"
Feb 13 04:25:36 localhost.localdomain microshift[132400]: kubelet E0213 04:25:36.665180 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:25:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:25:36.689492 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:25:36 localhost.localdomain microshift[132400]: kube-apiserver
E0213 04:25:36.689514 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:25:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:38.286220 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:38 localhost.localdomain microshift[132400]: kubelet I0213 04:25:38.663450 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:25:38 localhost.localdomain microshift[132400]: kubelet E0213 04:25:38.663959 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:25:39 localhost.localdomain microshift[132400]: kubelet I0213 04:25:39.663767 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:25:39 localhost.localdomain microshift[132400]: kubelet E0213 04:25:39.664088 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:25:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:43.286460 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:25:48 localhost.localdomain
microshift[132400]: sysconfwatch-controller I0213 04:25:48.286967 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:25:49 localhost.localdomain microshift[132400]: kubelet I0213 04:25:49.664224 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:25:49 localhost.localdomain microshift[132400]: kubelet E0213 04:25:49.664754 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:25:50 localhost.localdomain microshift[132400]: kubelet I0213 04:25:50.664114 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:25:50 localhost.localdomain microshift[132400]: kubelet E0213 04:25:50.664398 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:25:51 localhost.localdomain microshift[132400]: kubelet I0213 04:25:51.663235 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:25:51 localhost.localdomain microshift[132400]: kubelet E0213 04:25:51.663541 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:25:52 localhost.localdomain microshift[132400]: kubelet I0213 04:25:52.664837 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:25:52 localhost.localdomain microshift[132400]: kubelet E0213 04:25:52.665149 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:25:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:53.286836 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:25:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:25:54.560279 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:25:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:25:54.560575 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:25:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:25:58.286798 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:01 localhost.localdomain microshift[132400]: kubelet I0213 04:26:01.664168 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 
04:26:01 localhost.localdomain microshift[132400]: kubelet E0213 04:26:01.664746 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:26:02 localhost.localdomain microshift[132400]: kubelet I0213 04:26:02.663507 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:26:02 localhost.localdomain microshift[132400]: kubelet I0213 04:26:02.891912 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82} Feb 13 04:26:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:03.286836 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:05 localhost.localdomain microshift[132400]: kubelet I0213 04:26:05.666361 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:26:05 localhost.localdomain microshift[132400]: kubelet E0213 04:26:05.666861 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:26:05 localhost.localdomain microshift[132400]: kubelet I0213 04:26:05.667237 132400 scope.go:115] "RemoveContainer" 
containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:26:05 localhost.localdomain microshift[132400]: kubelet E0213 04:26:05.667426 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:26:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:08.286545 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:09 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:26:09.819883 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:26:09 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:26:09.819908 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:26:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:13.287467 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:15 localhost.localdomain microshift[132400]: kubelet I0213 04:26:15.668021 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:26:15 localhost.localdomain microshift[132400]: kubelet E0213 04:26:15.668284 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:26:17 localhost.localdomain microshift[132400]: kubelet I0213 04:26:17.234289 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:26:17 localhost.localdomain microshift[132400]: kubelet E0213 04:26:17.234404 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:28:19.234394217 -0500 EST m=+1386.414740485 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:26:17 localhost.localdomain microshift[132400]: kubelet I0213 04:26:17.663439 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:26:17 localhost.localdomain microshift[132400]: kubelet E0213 04:26:17.663733 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:26:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:18.287044 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:18 localhost.localdomain microshift[132400]: kubelet I0213 04:26:18.663781 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:26:18 localhost.localdomain microshift[132400]: kubelet E0213 04:26:18.664084 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:26:22 localhost.localdomain microshift[132400]: kubelet E0213 04:26:22.732981 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted 
volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:26:22 localhost.localdomain microshift[132400]: kubelet E0213 04:26:22.733015 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:26:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:23.286311 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:28.287207 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:28 localhost.localdomain microshift[132400]: kubelet I0213 04:26:28.665287 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:26:28 localhost.localdomain microshift[132400]: kubelet E0213 04:26:28.666807 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:26:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:26:28.698212 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:26:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:26:28.698238 132400 
reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:26:29 localhost.localdomain microshift[132400]: kubelet I0213 04:26:29.663273 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:26:29 localhost.localdomain microshift[132400]: kubelet E0213 04:26:29.663538 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:26:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:33.286269 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:33 localhost.localdomain microshift[132400]: kubelet I0213 04:26:33.663809 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:26:33 localhost.localdomain microshift[132400]: kubelet E0213 04:26:33.664118 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:26:36 localhost.localdomain microshift[132400]: kubelet I0213 04:26:36.945181 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" 
exitCode=255 Feb 13 04:26:36 localhost.localdomain microshift[132400]: kubelet I0213 04:26:36.945256 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82} Feb 13 04:26:36 localhost.localdomain microshift[132400]: kubelet I0213 04:26:36.945353 132400 scope.go:115] "RemoveContainer" containerID="6903b90c2d3125f6e862622743aeebcbf9fcbe25cf854781a1977c99c539f954" Feb 13 04:26:36 localhost.localdomain microshift[132400]: kubelet I0213 04:26:36.945569 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:26:36 localhost.localdomain microshift[132400]: kubelet E0213 04:26:36.945712 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:26:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:38.286543 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:42 localhost.localdomain microshift[132400]: kubelet I0213 04:26:42.663788 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:26:42 localhost.localdomain microshift[132400]: kubelet E0213 04:26:42.664301 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:26:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:43.287070 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:43 localhost.localdomain microshift[132400]: kubelet I0213 04:26:43.663507 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:26:43 localhost.localdomain microshift[132400]: kubelet E0213 04:26:43.663951 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:26:47 localhost.localdomain microshift[132400]: kubelet I0213 04:26:47.663610 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:26:47 localhost.localdomain microshift[132400]: kubelet E0213 04:26:47.663927 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:26:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:48.286626 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:48 localhost.localdomain microshift[132400]: kubelet I0213 04:26:48.664062 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:26:48 localhost.localdomain 
microshift[132400]: kubelet E0213 04:26:48.664224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:26:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:53.287400 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:26:53 localhost.localdomain microshift[132400]: kubelet I0213 04:26:53.664042 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:26:53 localhost.localdomain microshift[132400]: kubelet E0213 04:26:53.664387 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:26:57 localhost.localdomain microshift[132400]: kubelet I0213 04:26:57.663740 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:26:57 localhost.localdomain microshift[132400]: kubelet E0213 04:26:57.664240 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:26:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:26:58.286527 
132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:01 localhost.localdomain microshift[132400]: kubelet I0213 04:27:01.663778 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:27:01 localhost.localdomain microshift[132400]: kubelet E0213 04:27:01.664214 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:27:01 localhost.localdomain microshift[132400]: kubelet I0213 04:27:01.664521 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:27:01 localhost.localdomain microshift[132400]: kubelet E0213 04:27:01.664876 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:27:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:03.286760 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:07 localhost.localdomain microshift[132400]: kubelet I0213 04:27:07.664424 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:27:07 localhost.localdomain microshift[132400]: kubelet E0213 04:27:07.665067 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:27:07 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:27:07.915231 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:27:07 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:27:07.915515 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:27:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:08.287315 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:08 localhost.localdomain microshift[132400]: kubelet I0213 04:27:08.663919 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:27:08 localhost.localdomain microshift[132400]: kubelet E0213 04:27:08.664297 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:27:12 localhost.localdomain microshift[132400]: kubelet I0213 04:27:12.663937 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" 
Feb 13 04:27:12 localhost.localdomain microshift[132400]: kubelet E0213 04:27:12.664476 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:27:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:13.286693 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:27:14.709300 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:27:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:27:14.709554 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:27:16 localhost.localdomain microshift[132400]: kubelet I0213 04:27:16.667783 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703" Feb 13 04:27:16 localhost.localdomain microshift[132400]: kubelet E0213 04:27:16.668907 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 
04:27:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:18.286241 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:19 localhost.localdomain microshift[132400]: kubelet I0213 04:27:19.663359 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:27:19 localhost.localdomain microshift[132400]: kubelet E0213 04:27:19.663920 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:27:21 localhost.localdomain microshift[132400]: kubelet I0213 04:27:21.664347 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8" Feb 13 04:27:22 localhost.localdomain microshift[132400]: kubelet I0213 04:27:22.019015 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba} Feb 13 04:27:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:23.286995 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:27:23 localhost.localdomain microshift[132400]: kubelet I0213 04:27:23.663651 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:27:23 localhost.localdomain microshift[132400]: kubelet E0213 04:27:23.664012 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:27:25 localhost.localdomain microshift[132400]: kubelet I0213 04:27:25.024821 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" exitCode=1
Feb 13 04:27:25 localhost.localdomain microshift[132400]: kubelet I0213 04:27:25.024852 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba}
Feb 13 04:27:25 localhost.localdomain microshift[132400]: kubelet I0213 04:27:25.024872 132400 scope.go:115] "RemoveContainer" containerID="72d2c530e98e12ad38d1294eb31771e038a0d5ac31816a6f40e49b12cc5977e8"
Feb 13 04:27:25 localhost.localdomain microshift[132400]: kubelet I0213 04:27:25.025173 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:27:25 localhost.localdomain microshift[132400]: kubelet E0213 04:27:25.025396 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:27:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:28.286565 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:28 localhost.localdomain microshift[132400]: kubelet I0213 04:27:28.663531 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:27:28 localhost.localdomain microshift[132400]: kubelet E0213 04:27:28.663874 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:27:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:33.286633 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:34 localhost.localdomain microshift[132400]: kubelet I0213 04:27:34.665273 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:27:34 localhost.localdomain microshift[132400]: kubelet E0213 04:27:34.665734 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:27:37 localhost.localdomain microshift[132400]: kubelet I0213 04:27:37.664069 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:27:37 localhost.localdomain microshift[132400]: kubelet E0213 04:27:37.664991 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:27:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:38.287240 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:38 localhost.localdomain microshift[132400]: kubelet I0213 04:27:38.664156 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:27:38 localhost.localdomain microshift[132400]: kubelet E0213 04:27:38.664940 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:27:41 localhost.localdomain microshift[132400]: kubelet I0213 04:27:41.663621 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:27:42 localhost.localdomain microshift[132400]: kubelet I0213 04:27:42.055634 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07}
Feb 13 04:27:42 localhost.localdomain microshift[132400]: kubelet I0213 04:27:42.056549 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:27:43 localhost.localdomain microshift[132400]: kubelet I0213 04:27:43.056308 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:27:43 localhost.localdomain microshift[132400]: kubelet I0213 04:27:43.056703 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:27:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:43.286619 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:44 localhost.localdomain microshift[132400]: kubelet I0213 04:27:44.057977 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:27:44 localhost.localdomain microshift[132400]: kubelet I0213 04:27:44.058322 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet I0213 04:27:45.062115 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" exitCode=1
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet I0213 04:27:45.062150 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07}
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet I0213 04:27:45.062172 132400 scope.go:115] "RemoveContainer" containerID="fd57ecab445e487528a9ca542934f62d312bdb3d3d81d94cc10e3e8bd2c7a703"
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet I0213 04:27:45.062409 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet E0213 04:27:45.062787 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet I0213 04:27:45.663515 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:27:45 localhost.localdomain microshift[132400]: kubelet E0213 04:27:45.663921 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:27:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:48.286207 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:50 localhost.localdomain microshift[132400]: kubelet I0213 04:27:50.664221 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:27:50 localhost.localdomain microshift[132400]: kubelet E0213 04:27:50.665002 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:27:51 localhost.localdomain microshift[132400]: kubelet I0213 04:27:51.663955 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:27:51 localhost.localdomain microshift[132400]: kubelet E0213 04:27:51.664344 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:27:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:53.287419 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:27:57.326179 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:27:57.326434 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kubelet I0213 04:27:57.664051 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kubelet E0213 04:27:57.664568 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kubelet I0213 04:27:57.664776 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:27:57 localhost.localdomain microshift[132400]: kubelet E0213 04:27:57.665290 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:27:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:27:58.286513 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:02 localhost.localdomain microshift[132400]: kubelet I0213 04:28:02.663735 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:28:02 localhost.localdomain microshift[132400]: kubelet E0213 04:28:02.663982 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:28:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:03.286285 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:04 localhost.localdomain microshift[132400]: kubelet I0213 04:28:04.663767 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:28:04 localhost.localdomain microshift[132400]: kubelet E0213 04:28:04.664380 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:28:06 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:28:06.411156 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:28:06 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:28:06.411182 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:28:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:08.286270 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:10 localhost.localdomain microshift[132400]: kubelet I0213 04:28:10.664694 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:28:10 localhost.localdomain microshift[132400]: kubelet E0213 04:28:10.665336 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:28:12 localhost.localdomain microshift[132400]: kubelet I0213 04:28:12.663811 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:28:12 localhost.localdomain microshift[132400]: kubelet E0213 04:28:12.664384 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:28:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:13.286855 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:14 localhost.localdomain microshift[132400]: kubelet I0213 04:28:14.665258 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:28:14 localhost.localdomain microshift[132400]: kubelet E0213 04:28:14.665509 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:28:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:18.287009 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:19 localhost.localdomain microshift[132400]: kubelet I0213 04:28:19.241015 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:28:19 localhost.localdomain microshift[132400]: kubelet E0213 04:28:19.241332 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:30:21.241319616 -0500 EST m=+1508.421665884 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:28:19 localhost.localdomain microshift[132400]: kubelet I0213 04:28:19.663672 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:28:19 localhost.localdomain microshift[132400]: kubelet E0213 04:28:19.663862 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:28:20 localhost.localdomain microshift[132400]: kubelet I0213 04:28:20.902230 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:28:20 localhost.localdomain microshift[132400]: kubelet I0213 04:28:20.902938 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:28:20 localhost.localdomain microshift[132400]: kubelet E0213 04:28:20.903290 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.298283 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/9.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.300506 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.348064 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.397143 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.447736 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.464394 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.468018 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.475875 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.530934 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.583319 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-7bd9547b57-vhmkf_2e7bce65-b199-4d8a-bc2f-c63494419251/service-ca-controller/10.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.639348 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-provisioner/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.643451 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-resizer/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.648845 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/liveness-probe/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.654158 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/self-signed-cert-generator/2.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.657599 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/topolvm-controller/11.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.666205 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet E0213 04:28:22.666783 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.737407 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/liveness-probe/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.739921 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/file-checker/2.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.741993 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/lvmd/3.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.743714 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/topolvm-node/11.log"
Feb 13 04:28:22 localhost.localdomain microshift[132400]: kubelet I0213 04:28:22.746313 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/csi-registrar/3.log"
Feb 13 04:28:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:23.287125 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:23 localhost.localdomain microshift[132400]: kubelet I0213 04:28:23.663932 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:28:23 localhost.localdomain microshift[132400]: kubelet E0213 04:28:23.664560 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:28:25 localhost.localdomain microshift[132400]: kubelet E0213 04:28:25.924296 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:28:25 localhost.localdomain microshift[132400]: kubelet E0213 04:28:25.924324 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:28:26 localhost.localdomain microshift[132400]: kubelet I0213 04:28:26.192894 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:28:26 localhost.localdomain microshift[132400]: kubelet I0213 04:28:26.193269 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:28:26 localhost.localdomain microshift[132400]: kubelet E0213 04:28:26.193638 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:28:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:28.287257 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:30 localhost.localdomain microshift[132400]: kubelet I0213 04:28:30.663933 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:28:30 localhost.localdomain microshift[132400]: kubelet E0213 04:28:30.664129 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:28:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:33.286727 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:33 localhost.localdomain microshift[132400]: kubelet I0213 04:28:33.663871 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:28:33 localhost.localdomain microshift[132400]: kubelet E0213 04:28:33.664502 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:28:35 localhost.localdomain microshift[132400]: kubelet I0213 04:28:35.667010 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:28:35 localhost.localdomain microshift[132400]: kubelet E0213 04:28:35.667235 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:28:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:38.286732 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:40 localhost.localdomain microshift[132400]: kubelet I0213 04:28:40.664928 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:28:40 localhost.localdomain microshift[132400]: kubelet E0213 04:28:40.665611 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:28:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:43.287094 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:28:45.217426 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:28:45.217452 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kubelet I0213 04:28:45.666229 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kubelet E0213 04:28:45.666633 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kubelet I0213 04:28:45.666924 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:28:45 localhost.localdomain microshift[132400]: kubelet E0213 04:28:45.667108 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:28:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:48.286858 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:49 localhost.localdomain microshift[132400]: kubelet I0213 04:28:49.663729 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9"
Feb 13 04:28:49 localhost.localdomain microshift[132400]: kubelet E0213 04:28:49.663993 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:28:51 localhost.localdomain microshift[132400]: kubelet I0213 04:28:51.664163 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:28:51 localhost.localdomain microshift[132400]: kubelet E0213 04:28:51.664894 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:28:53.072293 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:28:53.072316 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.203756 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/9.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.206952 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.256468 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:53.286769 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.313133 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.364916 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.368040 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.377380 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.400702 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.491298 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.544816 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-7bd9547b57-vhmkf_2e7bce65-b199-4d8a-bc2f-c63494419251/service-ca-controller/10.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.601301 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-provisioner/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.604299 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-resizer/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.610232 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/liveness-probe/3.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.616738 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/self-signed-cert-generator/2.log"
Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.619709 132400 logs.go:323] "Finished parsing log file" 
path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/topolvm-controller/11.log" Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.668573 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/csi-registrar/3.log" Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.674201 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/liveness-probe/3.log" Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.680853 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/file-checker/2.log" Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.683996 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/lvmd/3.log" Feb 13 04:28:53 localhost.localdomain microshift[132400]: kubelet I0213 04:28:53.686303 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/topolvm-node/11.log" Feb 13 04:28:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:28:58.286934 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:28:59 localhost.localdomain microshift[132400]: kubelet I0213 04:28:59.664225 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:28:59 localhost.localdomain microshift[132400]: kubelet E0213 04:28:59.664882 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:29:00 localhost.localdomain microshift[132400]: kubelet I0213 04:29:00.663794 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:29:00 localhost.localdomain microshift[132400]: kubelet E0213 04:29:00.663956 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:29:02 localhost.localdomain microshift[132400]: kubelet I0213 04:29:02.663498 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:29:03 localhost.localdomain microshift[132400]: kubelet I0213 04:29:03.172560 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82} Feb 13 04:29:03 localhost.localdomain microshift[132400]: kubelet I0213 04:29:03.173279 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:29:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:03.286250 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:06 localhost.localdomain microshift[132400]: kubelet I0213 04:29:06.664719 132400 scope.go:115] "RemoveContainer" 
containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:29:06 localhost.localdomain microshift[132400]: kubelet E0213 04:29:06.665596 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:29:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:08.286594 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:10 localhost.localdomain microshift[132400]: kubelet I0213 04:29:10.664835 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:29:10 localhost.localdomain microshift[132400]: kubelet E0213 04:29:10.665344 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:29:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:13.286630 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:14 localhost.localdomain microshift[132400]: kubelet I0213 04:29:14.664331 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:29:14 localhost.localdomain microshift[132400]: kubelet E0213 04:29:14.664936 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:29:15 localhost.localdomain microshift[132400]: kubelet I0213 04:29:15.347281 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:15 localhost.localdomain microshift[132400]: kubelet I0213 04:29:15.347492 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:18.286598 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:18 localhost.localdomain microshift[132400]: kubelet I0213 04:29:18.347728 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:18 localhost.localdomain microshift[132400]: kubelet I0213 04:29:18.347761 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:20 localhost.localdomain microshift[132400]: kubelet I0213 04:29:20.664071 132400 
scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:29:20 localhost.localdomain microshift[132400]: kubelet E0213 04:29:20.664807 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:29:21 localhost.localdomain microshift[132400]: kubelet I0213 04:29:21.348237 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:21 localhost.localdomain microshift[132400]: kubelet I0213 04:29:21.348292 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:21 localhost.localdomain microshift[132400]: kubelet I0213 04:29:21.664089 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:29:21 localhost.localdomain microshift[132400]: kubelet E0213 04:29:21.664372 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" 
podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:29:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:23.286858 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:24 localhost.localdomain microshift[132400]: kubelet I0213 04:29:24.348378 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:24 localhost.localdomain microshift[132400]: kubelet I0213 04:29:24.348432 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:25 localhost.localdomain microshift[132400]: kubelet I0213 04:29:25.665954 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:29:25 localhost.localdomain microshift[132400]: kubelet E0213 04:29:25.666386 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:29:27 localhost.localdomain microshift[132400]: kubelet I0213 04:29:27.349510 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 13 04:29:27 localhost.localdomain microshift[132400]: kubelet I0213 04:29:27.350160 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:28.287112 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:29:30.139473 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:29:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:29:30.139495 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:29:30 localhost.localdomain microshift[132400]: kubelet I0213 04:29:30.350577 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:30 localhost.localdomain microshift[132400]: kubelet I0213 04:29:30.350635 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:32 localhost.localdomain 
microshift[132400]: kubelet I0213 04:29:32.664169 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:29:32 localhost.localdomain microshift[132400]: kubelet E0213 04:29:32.664459 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:29:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:33.287162 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:33 localhost.localdomain microshift[132400]: kubelet I0213 04:29:33.351710 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:33 localhost.localdomain microshift[132400]: kubelet I0213 04:29:33.351783 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:34 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:29:34.183436 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:29:34 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:29:34.183727 132400 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:29:35 localhost.localdomain microshift[132400]: kubelet I0213 04:29:35.664465 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:29:35 localhost.localdomain microshift[132400]: kubelet E0213 04:29:35.665098 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:29:36 localhost.localdomain microshift[132400]: kubelet I0213 04:29:36.352750 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:36 localhost.localdomain microshift[132400]: kubelet I0213 04:29:36.352787 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:37 localhost.localdomain microshift[132400]: kubelet I0213 04:29:37.664433 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:29:37 localhost.localdomain microshift[132400]: kubelet 
E0213 04:29:37.665120 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:29:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:38.286688 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:39 localhost.localdomain microshift[132400]: kubelet I0213 04:29:39.353945 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:39 localhost.localdomain microshift[132400]: kubelet I0213 04:29:39.354523 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:42 localhost.localdomain microshift[132400]: kubelet I0213 04:29:42.355703 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:42 localhost.localdomain microshift[132400]: kubelet I0213 04:29:42.356130 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:43.287229 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:45 localhost.localdomain microshift[132400]: kubelet I0213 04:29:45.356525 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:45 localhost.localdomain microshift[132400]: kubelet I0213 04:29:45.356821 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:45 localhost.localdomain microshift[132400]: kubelet I0213 04:29:45.663517 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:29:45 localhost.localdomain microshift[132400]: kubelet E0213 04:29:45.663892 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:29:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:48.287026 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:48 localhost.localdomain microshift[132400]: kubelet I0213 04:29:48.357406 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness 
probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:48 localhost.localdomain microshift[132400]: kubelet I0213 04:29:48.357793 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:50 localhost.localdomain microshift[132400]: kubelet I0213 04:29:50.663587 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:29:50 localhost.localdomain microshift[132400]: kubelet E0213 04:29:50.663929 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:29:51 localhost.localdomain microshift[132400]: kubelet I0213 04:29:51.358919 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:51 localhost.localdomain microshift[132400]: kubelet I0213 04:29:51.358975 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 
04:29:52 localhost.localdomain microshift[132400]: kubelet I0213 04:29:52.663630 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:29:52 localhost.localdomain microshift[132400]: kubelet E0213 04:29:52.664567 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:29:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:53.286829 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:29:54 localhost.localdomain microshift[132400]: kubelet I0213 04:29:54.359612 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:54 localhost.localdomain microshift[132400]: kubelet I0213 04:29:54.359896 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:29:57 localhost.localdomain microshift[132400]: kubelet I0213 04:29:57.360103 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:29:57 localhost.localdomain 
microshift[132400]: kubelet I0213 04:29:57.360447 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:29:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:29:58.286865 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:00 localhost.localdomain microshift[132400]: kubelet I0213 04:30:00.361062 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:00 localhost.localdomain microshift[132400]: kubelet I0213 04:30:00.361497 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:00 localhost.localdomain microshift[132400]: kubelet I0213 04:30:00.663964 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:30:00 localhost.localdomain microshift[132400]: kubelet E0213 04:30:00.664258 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.329989 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.332042 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/10.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.404249 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.451208 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.509937 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.541073 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.549383 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.556876 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.634305 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.694961 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-7bd9547b57-vhmkf_2e7bce65-b199-4d8a-bc2f-c63494419251/service-ca-controller/10.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.750127 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/self-signed-cert-generator/2.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.753250 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/topolvm-controller/11.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.756676 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-provisioner/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.760681 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-resizer/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.765137 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/liveness-probe/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.821337 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/lvmd/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.824170 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/topolvm-node/11.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.827300 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/csi-registrar/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.831576 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/liveness-probe/3.log"
Feb 13 04:30:01 localhost.localdomain microshift[132400]: kubelet I0213 04:30:01.835411 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/file-checker/2.log"
Feb 13 04:30:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:03.286906 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:03 localhost.localdomain microshift[132400]: kubelet I0213 04:30:03.362809 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:03 localhost.localdomain microshift[132400]: kubelet I0213 04:30:03.362958 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:05 localhost.localdomain microshift[132400]: kubelet I0213 04:30:05.668793 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:30:05 localhost.localdomain microshift[132400]: kubelet E0213 04:30:05.669196 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:30:06 localhost.localdomain microshift[132400]: kubelet I0213 04:30:06.363997 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:06 localhost.localdomain microshift[132400]: kubelet I0213 04:30:06.364217 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:06 localhost.localdomain microshift[132400]: kubelet I0213 04:30:06.665090 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:30:06 localhost.localdomain microshift[132400]: kubelet E0213 04:30:06.665237 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:30:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:08.286786 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:09 localhost.localdomain microshift[132400]: kubelet I0213 04:30:09.365021 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:09 localhost.localdomain microshift[132400]: kubelet I0213 04:30:09.365433 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.495902 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.498242 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/10.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.551935 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.601174 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.659880 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.663128 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.668939 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 04:30:10 localhost.localdomain microshift[132400]: kubelet I0213 04:30:10.676662 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log"
Feb 13 04:30:11 localhost.localdomain microshift[132400]: kubelet I0213 04:30:11.664300 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:30:11 localhost.localdomain microshift[132400]: kubelet E0213 04:30:11.664596 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:30:12 localhost.localdomain microshift[132400]: kubelet I0213 04:30:12.366502 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:12 localhost.localdomain microshift[132400]: kubelet I0213 04:30:12.366552 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:13.286949 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:14 localhost.localdomain microshift[132400]: kubelet I0213 04:30:14.632497 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:14 localhost.localdomain microshift[132400]: kubelet I0213 04:30:14.632829 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:15 localhost.localdomain microshift[132400]: kubelet I0213 04:30:15.367370 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:15 localhost.localdomain microshift[132400]: kubelet I0213 04:30:15.367636 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:15 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:30:15.494184 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:30:15 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:30:15.494205 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:30:17 localhost.localdomain microshift[132400]: kubelet I0213 04:30:17.664282 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:30:17 localhost.localdomain microshift[132400]: kubelet E0213 04:30:17.665309 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:30:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:18.287148 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:18 localhost.localdomain microshift[132400]: kubelet I0213 04:30:18.368810 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:18 localhost.localdomain microshift[132400]: kubelet I0213 04:30:18.369011 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:20 localhost.localdomain microshift[132400]: kubelet I0213 04:30:20.664188 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:30:20 localhost.localdomain microshift[132400]: kubelet E0213 04:30:20.664579 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:30:21 localhost.localdomain microshift[132400]: kubelet I0213 04:30:21.260849 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:30:21 localhost.localdomain microshift[132400]: kubelet E0213 04:30:21.261049 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:32:23.261030484 -0500 EST m=+1630.441376777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:30:21 localhost.localdomain microshift[132400]: kubelet I0213 04:30:21.369645 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:21 localhost.localdomain microshift[132400]: kubelet I0213 04:30:21.369795 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:23 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:30:23.109800 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:30:23 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:30:23.109967 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:30:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:23.286461 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:24 localhost.localdomain microshift[132400]: kubelet I0213 04:30:24.370138 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:24 localhost.localdomain microshift[132400]: kubelet I0213 04:30:24.370474 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:24 localhost.localdomain microshift[132400]: kubelet I0213 04:30:24.631448 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:24 localhost.localdomain microshift[132400]: kubelet I0213 04:30:24.631750 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:25 localhost.localdomain microshift[132400]: kubelet I0213 04:30:25.663855 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:30:25 localhost.localdomain microshift[132400]: kubelet E0213 04:30:25.674531 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:30:27 localhost.localdomain microshift[132400]: kubelet I0213 04:30:27.370937 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:27 localhost.localdomain microshift[132400]: kubelet I0213 04:30:27.371491 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:28.286756 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:29 localhost.localdomain microshift[132400]: kubelet E0213 04:30:29.120745 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:30:29 localhost.localdomain microshift[132400]: kubelet E0213 04:30:29.120776 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:30:29 localhost.localdomain microshift[132400]: kubelet I0213 04:30:29.664085 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:30:29 localhost.localdomain microshift[132400]: kubelet E0213 04:30:29.664373 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:30:30 localhost.localdomain microshift[132400]: kubelet I0213 04:30:30.372063 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:30 localhost.localdomain microshift[132400]: kubelet I0213 04:30:30.372327 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:32 localhost.localdomain microshift[132400]: kubelet I0213 04:30:32.664001 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:30:32 localhost.localdomain microshift[132400]: kubelet E0213 04:30:32.664383 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:30:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:33.286422 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:33 localhost.localdomain microshift[132400]: kubelet I0213 04:30:33.372648 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:33 localhost.localdomain microshift[132400]: kubelet I0213 04:30:33.372889 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:34 localhost.localdomain microshift[132400]: kubelet I0213 04:30:34.632315 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:34 localhost.localdomain microshift[132400]: kubelet I0213 04:30:34.632754 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:36 localhost.localdomain microshift[132400]: kubelet I0213 04:30:36.373950 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:36 localhost.localdomain microshift[132400]: kubelet I0213 04:30:36.373988 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:36 localhost.localdomain microshift[132400]: kubelet I0213 04:30:36.665616 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:30:36 localhost.localdomain microshift[132400]: kubelet E0213 04:30:36.665886 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:30:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:38.286866 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:39 localhost.localdomain microshift[132400]: kubelet I0213 04:30:39.375148 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:39 localhost.localdomain microshift[132400]: kubelet I0213 04:30:39.375200 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:40 localhost.localdomain microshift[132400]: kubelet I0213 04:30:40.664935 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:30:40 localhost.localdomain microshift[132400]: kubelet E0213 04:30:40.665209 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:30:42 localhost.localdomain microshift[132400]: kubelet I0213 04:30:42.375622 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:42 localhost.localdomain microshift[132400]: kubelet I0213 04:30:42.375681 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:43.286762 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:44 localhost.localdomain microshift[132400]: kubelet I0213 04:30:44.631027 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:44 localhost.localdomain microshift[132400]: kubelet I0213 04:30:44.631066 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:45 localhost.localdomain microshift[132400]: kubelet I0213 04:30:45.376477 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:45 localhost.localdomain microshift[132400]: kubelet I0213 04:30:45.376744 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:45 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:30:45.693117 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:30:45 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:30:45.693136 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:30:46 localhost.localdomain microshift[132400]: kubelet I0213 04:30:46.665481 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82"
Feb 13 04:30:46 localhost.localdomain microshift[132400]: kubelet E0213 04:30:46.665744 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:30:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:48.286339 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:48 localhost.localdomain microshift[132400]: kubelet I0213 04:30:48.377614 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:48 localhost.localdomain microshift[132400]: kubelet I0213 04:30:48.377666 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:49 localhost.localdomain microshift[132400]: kubelet I0213 04:30:49.664280 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba"
Feb 13 04:30:49 localhost.localdomain microshift[132400]: kubelet E0213 04:30:49.664582 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:30:51 localhost.localdomain microshift[132400]: kubelet I0213 04:30:51.377866 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:51 localhost.localdomain microshift[132400]: kubelet I0213 04:30:51.377973 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:53.287175 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.378785 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.378837 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.632776 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.632815 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.632836 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.633149 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.633230 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82" gracePeriod=30
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet I0213 04:30:54.665330 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:30:54 localhost.localdomain microshift[132400]: kubelet E0213 04:30:54.665735 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller 
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:30:57 localhost.localdomain microshift[132400]: kubelet I0213 04:30:57.379281 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:30:57 localhost.localdomain microshift[132400]: kubelet I0213 04:30:57.379627 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:30:57 localhost.localdomain microshift[132400]: kubelet I0213 04:30:57.663521 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:30:57 localhost.localdomain microshift[132400]: kubelet E0213 04:30:57.663705 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:30:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:30:58.286985 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:00 localhost.localdomain microshift[132400]: kubelet I0213 04:31:00.380406 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: 
Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:00 localhost.localdomain microshift[132400]: kubelet I0213 04:31:00.380746 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:01 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:31:01.787996 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:31:01 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:31:01.788022 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:31:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:03.286447 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:03 localhost.localdomain microshift[132400]: kubelet I0213 04:31:03.380891 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:03 localhost.localdomain microshift[132400]: kubelet I0213 04:31:03.380930 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:03 localhost.localdomain microshift[132400]: kubelet I0213 04:31:03.664262 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:31:03 localhost.localdomain microshift[132400]: kubelet E0213 04:31:03.664725 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:31:06 localhost.localdomain microshift[132400]: kubelet I0213 04:31:06.381388 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:06 localhost.localdomain microshift[132400]: kubelet I0213 04:31:06.381434 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:08.286211 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:08 localhost.localdomain microshift[132400]: kubelet I0213 04:31:08.663726 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:31:08 
localhost.localdomain microshift[132400]: kubelet E0213 04:31:08.664363 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:31:09 localhost.localdomain microshift[132400]: kubelet I0213 04:31:09.382639 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:09 localhost.localdomain microshift[132400]: kubelet I0213 04:31:09.382919 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:10 localhost.localdomain microshift[132400]: kubelet I0213 04:31:10.663947 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:31:10 localhost.localdomain microshift[132400]: kubelet E0213 04:31:10.664133 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:31:12 localhost.localdomain microshift[132400]: kubelet I0213 
04:31:12.383087 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:12 localhost.localdomain microshift[132400]: kubelet I0213 04:31:12.383615 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:13.286274 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:14 localhost.localdomain microshift[132400]: kubelet I0213 04:31:14.664482 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:31:14 localhost.localdomain microshift[132400]: kubelet E0213 04:31:14.664997 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.379778 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82" exitCode=0 Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.379928 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" 
event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82} Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.379976 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f} Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.380009 132400 scope.go:115] "RemoveContainer" containerID="1c319bc0e60733122403404c59ab85f1d29e0dffda94a78f3f799843e4b06ed9" Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.383790 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.383817 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:15 localhost.localdomain microshift[132400]: kubelet I0213 04:31:15.383844 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:31:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:18.286467 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:21 localhost.localdomain microshift[132400]: kubelet I0213 04:31:21.663558 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:31:21 
localhost.localdomain microshift[132400]: kubelet E0213 04:31:21.663784 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:31:22 localhost.localdomain microshift[132400]: kubelet I0213 04:31:22.664334 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:31:22 localhost.localdomain microshift[132400]: kubelet E0213 04:31:22.665146 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:31:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:23.286739 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:23 localhost.localdomain microshift[132400]: kubelet I0213 04:31:23.853909 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/11.log" Feb 13 04:31:23 localhost.localdomain microshift[132400]: kubelet I0213 04:31:23.856223 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log" Feb 13 04:31:23 localhost.localdomain microshift[132400]: kubelet I0213 04:31:23.904349 132400 logs.go:323] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log" Feb 13 04:31:23 localhost.localdomain microshift[132400]: kubelet I0213 04:31:23.952067 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log" Feb 13 04:31:24 localhost.localdomain microshift[132400]: kubelet I0213 04:31:24.003300 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log" Feb 13 04:31:24 localhost.localdomain microshift[132400]: kubelet I0213 04:31:24.012832 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log" Feb 13 04:31:27 localhost.localdomain microshift[132400]: kubelet I0213 04:31:27.347311 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:27 localhost.localdomain microshift[132400]: kubelet I0213 04:31:27.347637 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:28.287100 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:28 localhost.localdomain microshift[132400]: kubelet I0213 04:31:28.664866 132400 scope.go:115] "RemoveContainer" 
containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:31:28 localhost.localdomain microshift[132400]: kubelet E0213 04:31:28.665111 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:31:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:31:28.971584 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:31:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:31:28.971608 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:31:30 localhost.localdomain microshift[132400]: kubelet I0213 04:31:30.348640 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:30 localhost.localdomain microshift[132400]: kubelet I0213 04:31:30.348728 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 
04:31:33.286638 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet I0213 04:31:33.349269 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet I0213 04:31:33.349316 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet I0213 04:31:33.663838 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet E0213 04:31:33.664102 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet I0213 04:31:33.664212 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:31:33 localhost.localdomain microshift[132400]: kubelet E0213 04:31:33.664313 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:31:36 localhost.localdomain microshift[132400]: kubelet I0213 04:31:36.350088 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:36 localhost.localdomain microshift[132400]: kubelet I0213 04:31:36.350123 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:31:36.587755 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:31:36 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:31:36.587780 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:31:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:38.286761 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.653509 132400 logs.go:323] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/11.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.655757 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.705559 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.752665 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.804097 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.812488 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.815765 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log" Feb 13 04:31:38 localhost.localdomain microshift[132400]: kubelet I0213 04:31:38.821092 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log" Feb 13 04:31:39 localhost.localdomain microshift[132400]: kubelet I0213 04:31:39.350948 132400 patch_prober.go:28] 
interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:39 localhost.localdomain microshift[132400]: kubelet I0213 04:31:39.351314 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:42 localhost.localdomain microshift[132400]: kubelet I0213 04:31:42.351741 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:42 localhost.localdomain microshift[132400]: kubelet I0213 04:31:42.351784 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:43.286931 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:43 localhost.localdomain microshift[132400]: kubelet I0213 04:31:43.663981 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:31:43 localhost.localdomain microshift[132400]: kubelet E0213 04:31:43.664273 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:31:44 localhost.localdomain microshift[132400]: kubelet I0213 04:31:44.664388 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:31:45 localhost.localdomain microshift[132400]: kubelet I0213 04:31:45.352279 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:45 localhost.localdomain microshift[132400]: kubelet I0213 04:31:45.352444 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:45 localhost.localdomain microshift[132400]: kubelet I0213 04:31:45.426375 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d} Feb 13 04:31:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:48.287276 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:48 localhost.localdomain microshift[132400]: kubelet I0213 04:31:48.352652 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 13 04:31:48 localhost.localdomain microshift[132400]: kubelet I0213 04:31:48.352856 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:31:48 localhost.localdomain microshift[132400]: kubelet I0213 04:31:48.664019 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07"
Feb 13 04:31:48 localhost.localdomain microshift[132400]: kubelet E0213 04:31:48.664397 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:31:51 localhost.localdomain microshift[132400]: kubelet I0213 04:31:51.353950 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:31:51 localhost.localdomain microshift[132400]: kubelet I0213 04:31:51.354217 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:31:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:53.287004 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:31:54 localhost.localdomain microshift[132400]: kubelet I0213 04:31:54.354869 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:54 localhost.localdomain microshift[132400]: kubelet I0213 04:31:54.354928 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:57 localhost.localdomain microshift[132400]: kubelet I0213 04:31:57.356032 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:31:57 localhost.localdomain microshift[132400]: kubelet I0213 04:31:57.356457 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:31:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:31:58.286901 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:31:58 localhost.localdomain microshift[132400]: kubelet I0213 04:31:58.663537 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:31:58 localhost.localdomain microshift[132400]: kubelet E0213 04:31:58.664102 132400 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:32:00 localhost.localdomain microshift[132400]: kubelet I0213 04:32:00.356732 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:00 localhost.localdomain microshift[132400]: kubelet I0213 04:32:00.357254 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:01 localhost.localdomain microshift[132400]: kubelet I0213 04:32:01.663991 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:01 localhost.localdomain microshift[132400]: kubelet E0213 04:32:01.664613 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:32:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:03.286220 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:03 localhost.localdomain microshift[132400]: kubelet I0213 
04:32:03.357647 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:03 localhost.localdomain microshift[132400]: kubelet I0213 04:32:03.357823 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:06 localhost.localdomain microshift[132400]: kubelet I0213 04:32:06.358803 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:06 localhost.localdomain microshift[132400]: kubelet I0213 04:32:06.359165 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:08.286568 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:09 localhost.localdomain microshift[132400]: kubelet I0213 04:32:09.359736 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:09 localhost.localdomain microshift[132400]: kubelet 
I0213 04:32:09.360092 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:11 localhost.localdomain microshift[132400]: kubelet I0213 04:32:11.663499 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:32:11 localhost.localdomain microshift[132400]: kubelet E0213 04:32:11.663823 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:32:12 localhost.localdomain microshift[132400]: kubelet I0213 04:32:12.361231 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:12 localhost.localdomain microshift[132400]: kubelet I0213 04:32:12.361290 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:32:13.157709 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource 
(get groups.user.openshift.io) Feb 13 04:32:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:32:13.157997 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:32:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:13.287299 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:13 localhost.localdomain microshift[132400]: kubelet I0213 04:32:13.663972 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:13 localhost.localdomain microshift[132400]: kubelet E0213 04:32:13.664308 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:32:15 localhost.localdomain microshift[132400]: kubelet I0213 04:32:15.361405 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:15 localhost.localdomain microshift[132400]: kubelet I0213 04:32:15.361693 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:18 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:18.286518 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:18 localhost.localdomain microshift[132400]: kubelet I0213 04:32:18.362214 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:18 localhost.localdomain microshift[132400]: kubelet I0213 04:32:18.362411 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:19 localhost.localdomain microshift[132400]: kubelet I0213 04:32:19.475902 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" exitCode=255 Feb 13 04:32:19 localhost.localdomain microshift[132400]: kubelet I0213 04:32:19.476165 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d} Feb 13 04:32:19 localhost.localdomain microshift[132400]: kubelet I0213 04:32:19.476227 132400 scope.go:115] "RemoveContainer" containerID="d64f01122127e1d927cab60963cff9e0f79ff42dbcddade0187ed2dcd0ae1a82" Feb 13 04:32:19 localhost.localdomain microshift[132400]: kubelet I0213 04:32:19.476432 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:32:19 localhost.localdomain microshift[132400]: kubelet E0213 
04:32:19.476613 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:32:21 localhost.localdomain microshift[132400]: kubelet I0213 04:32:21.362788 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:21 localhost.localdomain microshift[132400]: kubelet I0213 04:32:21.362847 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:23.286184 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:23 localhost.localdomain microshift[132400]: kubelet I0213 04:32:23.344913 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:32:23 localhost.localdomain microshift[132400]: kubelet E0213 04:32:23.345041 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle 
podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:34:25.345030293 -0500 EST m=+1752.525376562 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:32:24 localhost.localdomain microshift[132400]: kubelet I0213 04:32:24.363873 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:24 localhost.localdomain microshift[132400]: kubelet I0213 04:32:24.364195 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:24 localhost.localdomain microshift[132400]: kubelet I0213 04:32:24.631919 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:24 localhost.localdomain microshift[132400]: kubelet I0213 04:32:24.631975 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:26 
localhost.localdomain microshift[132400]: kubelet I0213 04:32:26.665459 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:32:26 localhost.localdomain microshift[132400]: kubelet I0213 04:32:26.666014 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:26 localhost.localdomain microshift[132400]: kubelet E0213 04:32:26.666254 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:32:27 localhost.localdomain microshift[132400]: kubelet I0213 04:32:27.364727 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:27 localhost.localdomain microshift[132400]: kubelet I0213 04:32:27.364792 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:27 localhost.localdomain microshift[132400]: kubelet I0213 04:32:27.491562 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416} Feb 13 04:32:28 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:28.287221 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.366126 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.366190 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.497750 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" exitCode=1 Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.497782 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416} Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.497808 132400 scope.go:115] "RemoveContainer" containerID="3dd73b0c67ee23c7d90d3d0f7785499c16436ae947001d4d94bb3d2fdd3c84ba" Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet I0213 04:32:30.498327 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:32:30 localhost.localdomain microshift[132400]: kubelet E0213 04:32:30.498732 
132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:32:31 localhost.localdomain microshift[132400]: kubelet I0213 04:32:31.663389 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:32:31 localhost.localdomain microshift[132400]: kubelet E0213 04:32:31.663586 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:32:32 localhost.localdomain microshift[132400]: kubelet E0213 04:32:32.306232 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:32:32 localhost.localdomain microshift[132400]: kubelet E0213 04:32:32.306550 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:32:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:33.287204 132400 net.go:46] ovn gateway IP 
address: 192.168.122.17 Feb 13 04:32:33 localhost.localdomain microshift[132400]: kubelet I0213 04:32:33.366891 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:33 localhost.localdomain microshift[132400]: kubelet I0213 04:32:33.367096 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:34 localhost.localdomain microshift[132400]: kubelet I0213 04:32:34.631297 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:34 localhost.localdomain microshift[132400]: kubelet I0213 04:32:34.631650 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:34 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:32:34.848959 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:32:34 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:32:34.849112 132400 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:32:36 localhost.localdomain microshift[132400]: kubelet I0213 04:32:36.368083 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:36 localhost.localdomain microshift[132400]: kubelet I0213 04:32:36.368403 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:37 localhost.localdomain microshift[132400]: kubelet I0213 04:32:37.664463 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:37 localhost.localdomain microshift[132400]: kubelet E0213 04:32:37.664980 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:32:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:38.286888 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:39 localhost.localdomain microshift[132400]: kubelet I0213 04:32:39.369204 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:39 localhost.localdomain microshift[132400]: kubelet I0213 04:32:39.369644 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:42 localhost.localdomain microshift[132400]: kubelet I0213 04:32:42.370348 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:42 localhost.localdomain microshift[132400]: kubelet I0213 04:32:42.370400 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:43.286914 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:43 localhost.localdomain microshift[132400]: kubelet I0213 04:32:43.663970 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:32:43 localhost.localdomain microshift[132400]: kubelet E0213 04:32:43.664250 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:32:44 localhost.localdomain microshift[132400]: kubelet I0213 04:32:44.631845 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:44 localhost.localdomain microshift[132400]: kubelet I0213 04:32:44.631912 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:45 localhost.localdomain microshift[132400]: kubelet I0213 04:32:45.371379 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:45 localhost.localdomain microshift[132400]: kubelet I0213 04:32:45.371430 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:45 localhost.localdomain microshift[132400]: kubelet I0213 04:32:45.669858 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:32:45 localhost.localdomain microshift[132400]: kubelet E0213 04:32:45.669990 
132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:32:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:48.286295 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:48 localhost.localdomain microshift[132400]: kubelet I0213 04:32:48.372023 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:48 localhost.localdomain microshift[132400]: kubelet I0213 04:32:48.372263 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:48 localhost.localdomain microshift[132400]: kubelet I0213 04:32:48.663411 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:49 localhost.localdomain microshift[132400]: kubelet I0213 04:32:49.530863 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45} Feb 13 04:32:49 localhost.localdomain microshift[132400]: kubelet I0213 04:32:49.531543 132400 kubelet.go:2323] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:32:50 localhost.localdomain microshift[132400]: kubelet I0213 04:32:50.531640 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:50 localhost.localdomain microshift[132400]: kubelet I0213 04:32:50.531760 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:51 localhost.localdomain microshift[132400]: kubelet I0213 04:32:51.373062 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:51 localhost.localdomain microshift[132400]: kubelet I0213 04:32:51.373117 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:51 localhost.localdomain microshift[132400]: kubelet I0213 04:32:51.533707 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:51 localhost.localdomain microshift[132400]: kubelet I0213 04:32:51.533780 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:52 localhost.localdomain microshift[132400]: kubelet I0213 04:32:52.537905 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" exitCode=1 Feb 13 04:32:52 localhost.localdomain microshift[132400]: kubelet I0213 04:32:52.537938 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45} Feb 13 04:32:52 localhost.localdomain microshift[132400]: kubelet I0213 04:32:52.537960 132400 scope.go:115] "RemoveContainer" containerID="71a9158d64a9af4d27d104a42c5d6e658ab063e8e8627963591437574a023a07" Feb 13 04:32:52 localhost.localdomain microshift[132400]: kubelet I0213 04:32:52.538214 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:32:52 localhost.localdomain microshift[132400]: kubelet E0213 04:32:52.538480 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:32:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:53.287121 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:32:54 localhost.localdomain microshift[132400]: kubelet I0213 04:32:54.374302 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:54 localhost.localdomain microshift[132400]: kubelet I0213 04:32:54.374732 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:54 localhost.localdomain microshift[132400]: kubelet I0213 04:32:54.631842 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:54 localhost.localdomain microshift[132400]: kubelet I0213 04:32:54.632049 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:55 localhost.localdomain microshift[132400]: kubelet I0213 04:32:55.667169 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:32:55 localhost.localdomain microshift[132400]: kubelet E0213 04:32:55.667571 132400 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:32:57 localhost.localdomain microshift[132400]: kubelet I0213 04:32:57.375053 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:32:57 localhost.localdomain microshift[132400]: kubelet I0213 04:32:57.375130 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:32:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:32:58.286807 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:00 localhost.localdomain microshift[132400]: kubelet I0213 04:33:00.375624 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:00 localhost.localdomain microshift[132400]: kubelet I0213 04:33:00.375685 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:33:01 localhost.localdomain microshift[132400]: kubelet I0213 04:33:01.663522 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:33:01 localhost.localdomain microshift[132400]: kubelet E0213 04:33:01.664322 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:33:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:03.286706 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:03 localhost.localdomain microshift[132400]: kubelet I0213 04:33:03.376302 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:03 localhost.localdomain microshift[132400]: kubelet I0213 04:33:03.376363 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:04 localhost.localdomain microshift[132400]: kubelet I0213 04:33:04.632205 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:04 
localhost.localdomain microshift[132400]: kubelet I0213 04:33:04.632549 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:04 localhost.localdomain microshift[132400]: kubelet I0213 04:33:04.632620 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:33:04 localhost.localdomain microshift[132400]: kubelet I0213 04:33:04.633032 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:33:04 localhost.localdomain microshift[132400]: kubelet I0213 04:33:04.633176 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" gracePeriod=30 Feb 13 04:33:05 localhost.localdomain microshift[132400]: kubelet I0213 04:33:05.663254 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:33:05 localhost.localdomain microshift[132400]: kubelet E0213 04:33:05.663544 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:33:06 localhost.localdomain microshift[132400]: kubelet I0213 04:33:06.377352 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:06 localhost.localdomain microshift[132400]: kubelet I0213 04:33:06.377424 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:07 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:33:07.861365 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:33:07 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:33:07.861727 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:33:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:08.286800 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:09 localhost.localdomain microshift[132400]: kubelet I0213 04:33:09.378343 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:09 localhost.localdomain microshift[132400]: 
kubelet I0213 04:33:09.378867 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:09 localhost.localdomain microshift[132400]: kubelet I0213 04:33:09.663776 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:33:09 localhost.localdomain microshift[132400]: kubelet E0213 04:33:09.664223 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:33:12 localhost.localdomain microshift[132400]: kubelet I0213 04:33:12.379953 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:12 localhost.localdomain microshift[132400]: kubelet I0213 04:33:12.380015 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:13.286418 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:33:13.759032 132400 
reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:33:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:33:13.759058 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:33:15 localhost.localdomain microshift[132400]: kubelet I0213 04:33:15.380814 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:15 localhost.localdomain microshift[132400]: kubelet I0213 04:33:15.380867 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:15 localhost.localdomain microshift[132400]: kubelet I0213 04:33:15.664148 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:33:15 localhost.localdomain microshift[132400]: kubelet E0213 04:33:15.664317 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" 
pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:33:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:18.286927 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:18 localhost.localdomain microshift[132400]: kubelet I0213 04:33:18.381135 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:18 localhost.localdomain microshift[132400]: kubelet I0213 04:33:18.381194 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:18 localhost.localdomain microshift[132400]: kubelet I0213 04:33:18.663597 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:33:18 localhost.localdomain microshift[132400]: kubelet E0213 04:33:18.664114 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:33:20 localhost.localdomain microshift[132400]: kubelet I0213 04:33:20.902077 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:33:20 localhost.localdomain microshift[132400]: kubelet I0213 
04:33:20.902927 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:33:20 localhost.localdomain microshift[132400]: kubelet E0213 04:33:20.903274 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:33:21 localhost.localdomain microshift[132400]: kubelet I0213 04:33:21.382051 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:21 localhost.localdomain microshift[132400]: kubelet I0213 04:33:21.382099 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:23.287132 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:24 localhost.localdomain microshift[132400]: kubelet I0213 04:33:24.383064 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:24 localhost.localdomain microshift[132400]: kubelet I0213 04:33:24.383437 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:24 localhost.localdomain microshift[132400]: kubelet E0213 04:33:24.743976 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:33:25 localhost.localdomain microshift[132400]: kubelet I0213 04:33:25.591730 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" exitCode=0 Feb 13 04:33:25 localhost.localdomain microshift[132400]: kubelet I0213 04:33:25.591987 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f} Feb 13 04:33:25 localhost.localdomain microshift[132400]: kubelet I0213 04:33:25.592098 132400 scope.go:115] "RemoveContainer" containerID="c1caebec88e6df6e26a2b4d0733e0928623f8f2025a0ed6f1f0e847bf7c25d82" Feb 13 04:33:25 localhost.localdomain microshift[132400]: kubelet I0213 04:33:25.592331 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:33:25 localhost.localdomain microshift[132400]: kubelet E0213 04:33:25.592551 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns 
pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:33:26 localhost.localdomain microshift[132400]: kubelet I0213 04:33:26.192739 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:33:26 localhost.localdomain microshift[132400]: kubelet I0213 04:33:26.193174 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:33:26 localhost.localdomain microshift[132400]: kubelet E0213 04:33:26.193545 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:33:27 localhost.localdomain microshift[132400]: kubelet I0213 04:33:27.384014 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:33:27 localhost.localdomain microshift[132400]: kubelet I0213 04:33:27.384326 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:33:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:28.287062 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 
04:33:30 localhost.localdomain microshift[132400]: kubelet I0213 04:33:30.663386 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:33:30 localhost.localdomain microshift[132400]: kubelet E0213 04:33:30.663890 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:33:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:33.286644 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:35 localhost.localdomain microshift[132400]: kubelet I0213 04:33:35.671728 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:33:35 localhost.localdomain microshift[132400]: kubelet E0213 04:33:35.672029 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:33:37 localhost.localdomain microshift[132400]: kubelet I0213 04:33:37.664271 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:33:37 localhost.localdomain microshift[132400]: kubelet E0213 04:33:37.664506 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns 
pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:33:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:38.286913 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:39 localhost.localdomain microshift[132400]: kubelet I0213 04:33:39.663615 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:33:39 localhost.localdomain microshift[132400]: kubelet E0213 04:33:39.663931 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:33:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:43.286882 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:45 localhost.localdomain microshift[132400]: kubelet I0213 04:33:45.669494 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:33:45 localhost.localdomain microshift[132400]: kubelet E0213 04:33:45.669730 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:33:47 localhost.localdomain microshift[132400]: kubelet I0213 04:33:47.664023 132400 scope.go:115] 
"RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:33:47 localhost.localdomain microshift[132400]: kubelet E0213 04:33:47.664293 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:33:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:48.286283 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:33:51 localhost.localdomain microshift[132400]: kubelet I0213 04:33:51.663862 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:33:51 localhost.localdomain microshift[132400]: kubelet I0213 04:33:51.664264 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:33:51 localhost.localdomain microshift[132400]: kubelet E0213 04:33:51.664468 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:33:51 localhost.localdomain microshift[132400]: kubelet E0213 04:33:51.664720 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:33:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:53.286418 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:33:55 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:33:55.242297 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:33:55 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:33:55.242783 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:33:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:33:58.286239 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:33:58 localhost.localdomain microshift[132400]: kubelet I0213 04:33:58.665001 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:33:58 localhost.localdomain microshift[132400]: kubelet E0213 04:33:58.665236 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:34:00 localhost.localdomain microshift[132400]: kubelet I0213 04:34:00.663951 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:34:00 localhost.localdomain microshift[132400]: kubelet E0213 04:34:00.664106 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:34:02 localhost.localdomain microshift[132400]: kubelet I0213 04:34:02.665042 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:34:02 localhost.localdomain microshift[132400]: kubelet E0213 04:34:02.665677 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:34:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:03.287291 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:04 localhost.localdomain microshift[132400]: kubelet I0213 04:34:04.664200 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:34:04 localhost.localdomain microshift[132400]: kubelet E0213 04:34:04.664619 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:34:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:08.287065 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:08 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:34:08.869448 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:34:08 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:34:08.869629 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:34:11 localhost.localdomain microshift[132400]: kubelet I0213 04:34:11.664208 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:34:11 localhost.localdomain microshift[132400]: kubelet E0213 04:34:11.664806 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:34:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:13.286824 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:13 localhost.localdomain microshift[132400]: kubelet I0213 04:34:13.664235 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:34:13 localhost.localdomain microshift[132400]: kubelet E0213 04:34:13.664420 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:34:15 localhost.localdomain microshift[132400]: kubelet I0213 04:34:15.666025 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:34:15 localhost.localdomain microshift[132400]: kubelet E0213 04:34:15.666334 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:34:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:18.286999 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:18 localhost.localdomain microshift[132400]: kubelet I0213 04:34:18.664233 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:34:18 localhost.localdomain microshift[132400]: kubelet E0213 04:34:18.664465 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:34:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:23.287210 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:23 localhost.localdomain microshift[132400]: kubelet I0213 04:34:23.663841 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:34:23 localhost.localdomain microshift[132400]: kubelet E0213 04:34:23.664267 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:34:25 localhost.localdomain microshift[132400]: kubelet I0213 04:34:25.370407 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:34:25 localhost.localdomain microshift[132400]: kubelet E0213 04:34:25.370778 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:36:27.370767641 -0500 EST m=+1874.551113908 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:34:27 localhost.localdomain microshift[132400]: kubelet I0213 04:34:27.663973 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:34:27 localhost.localdomain microshift[132400]: kubelet E0213 04:34:27.664631 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:34:27 localhost.localdomain microshift[132400]: kubelet I0213 04:34:27.664751 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:34:27 localhost.localdomain microshift[132400]: kubelet E0213 04:34:27.665149 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:34:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:28.286862 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:33.286929 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:33 localhost.localdomain microshift[132400]: kubelet I0213 04:34:33.663870 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:34:33 localhost.localdomain microshift[132400]: kubelet E0213 04:34:33.664107 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:34:35 localhost.localdomain microshift[132400]: kubelet E0213 04:34:35.505489 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:34:35 localhost.localdomain microshift[132400]: kubelet E0213 04:34:35.505849 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:34:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:38.287216 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet I0213 04:34:38.663906 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet E0213 04:34:38.664088 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet I0213 04:34:38.664285 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet I0213 04:34:38.664577 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet E0213 04:34:38.664686 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:34:38 localhost.localdomain microshift[132400]: kubelet E0213 04:34:38.664865 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:34:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:43.287217 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:43 localhost.localdomain microshift[132400]: kubelet I0213 04:34:43.986187 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/11.log"
Feb 13 04:34:43 localhost.localdomain microshift[132400]: kubelet I0213 04:34:43.988633 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet I0213 04:34:44.065426 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet I0213 04:34:44.129361 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet I0213 04:34:44.209006 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet I0213 04:34:44.218985 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet I0213 04:34:44.665198 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:34:44 localhost.localdomain microshift[132400]: kubelet E0213 04:34:44.665888 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:34:45 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:34:45.030255 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:34:45 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:34:45.030408 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:34:45 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:34:45.097482 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:34:45 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:34:45.097688 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:34:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:48.287027 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:50 localhost.localdomain microshift[132400]: kubelet I0213 04:34:50.664403 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:34:50 localhost.localdomain microshift[132400]: kubelet E0213 04:34:50.665109 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:34:51 localhost.localdomain microshift[132400]: kubelet I0213 04:34:51.664405 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:34:51 localhost.localdomain microshift[132400]: kubelet E0213 04:34:51.664972 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:34:51 localhost.localdomain microshift[132400]: kubelet I0213 04:34:51.665122 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:34:51 localhost.localdomain microshift[132400]: kubelet E0213 04:34:51.665566 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:34:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:53.286931 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:34:56 localhost.localdomain microshift[132400]: kubelet I0213 04:34:56.666089 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:34:56 localhost.localdomain microshift[132400]: kubelet E0213 04:34:56.666359 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:34:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:34:58.286419 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:01 localhost.localdomain microshift[132400]: kubelet I0213 04:35:01.663356 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:35:01 localhost.localdomain microshift[132400]: kubelet E0213 04:35:01.663543 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:35:02 localhost.localdomain microshift[132400]: kubelet I0213 04:35:02.664101 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:35:02 localhost.localdomain microshift[132400]: kubelet E0213 04:35:02.664374 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:35:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:03.286975 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:06 localhost.localdomain microshift[132400]: kubelet I0213 04:35:06.665929 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:35:06 localhost.localdomain microshift[132400]: kubelet E0213 04:35:06.666299 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:35:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:08.287125 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:10 localhost.localdomain microshift[132400]: kubelet I0213 04:35:10.663918 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:35:10 localhost.localdomain microshift[132400]: kubelet E0213 04:35:10.664394 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:35:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:13.286536 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:14 localhost.localdomain microshift[132400]: kubelet I0213 04:35:14.664288 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:35:14 localhost.localdomain microshift[132400]: kubelet E0213 04:35:14.665057 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:35:15 localhost.localdomain microshift[132400]: kubelet I0213 04:35:15.665107 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:35:15 localhost.localdomain microshift[132400]: kubelet E0213 04:35:15.665759 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:35:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:18.286950 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:20 localhost.localdomain microshift[132400]: kubelet I0213 04:35:20.663710 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:35:20 localhost.localdomain microshift[132400]: kubelet E0213 04:35:20.664588 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:35:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:23.286409 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:24 localhost.localdomain microshift[132400]: kubelet I0213 04:35:24.663640 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:35:24 localhost.localdomain microshift[132400]: kubelet E0213 04:35:24.663902 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:35:25 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:35:25.661829 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:35:25 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:35:25.661875 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:35:26 localhost.localdomain microshift[132400]: kubelet I0213 04:35:26.664093 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:35:26 localhost.localdomain microshift[132400]: kubelet E0213 04:35:26.664690 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:35:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:28.287116 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:30 localhost.localdomain microshift[132400]: kubelet I0213 04:35:30.664830 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:35:30 localhost.localdomain microshift[132400]: kubelet E0213 04:35:30.665016 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:35:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:33.286560 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:34 localhost.localdomain microshift[132400]: kubelet I0213 04:35:34.664257 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:35:34 localhost.localdomain microshift[132400]: kubelet E0213 04:35:34.665520 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:35:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:38.286953 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kubelet I0213 04:35:38.663713 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kubelet E0213 04:35:38.664176 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kubelet I0213 04:35:38.664829 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kubelet E0213 04:35:38.665144 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:35:38.978614 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:35:38 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:35:38.978643 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:35:42 localhost.localdomain microshift[132400]: kubelet I0213 04:35:42.664378 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:35:42 localhost.localdomain microshift[132400]: kubelet E0213 04:35:42.664560 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:35:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:43.286936 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:46 localhost.localdomain microshift[132400]: kubelet I0213 04:35:46.663669 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:35:46 localhost.localdomain microshift[132400]: kubelet E0213 04:35:46.663987 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:35:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:48.286649 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:52 localhost.localdomain microshift[132400]: kubelet I0213 04:35:52.663563 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:35:52 localhost.localdomain microshift[132400]: kubelet E0213 04:35:52.664162 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:35:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:53.286831 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:53 localhost.localdomain microshift[132400]: kubelet I0213 04:35:53.664438 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:35:53 localhost.localdomain microshift[132400]: kubelet E0213 04:35:53.665159 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:35:54 localhost.localdomain microshift[132400]: kubelet I0213 04:35:54.664641 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:35:54 localhost.localdomain microshift[132400]: kubelet E0213 04:35:54.664889 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:35:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:35:58.286854 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:35:59 localhost.localdomain microshift[132400]: kubelet I0213 04:35:59.663584 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:35:59 localhost.localdomain microshift[132400]: kubelet E0213 04:35:59.663918 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:36:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:03.286813 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:36:03 localhost.localdomain microshift[132400]: kubelet I0213 04:36:03.663856 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:36:03 localhost.localdomain microshift[132400]: kubelet E0213 04:36:03.664274 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:36:05 localhost.localdomain microshift[132400]: kubelet I0213 04:36:05.664069 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:36:05 localhost.localdomain microshift[132400]: kubelet E0213 04:36:05.664228 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:36:07 localhost.localdomain microshift[132400]: kubelet I0213 04:36:07.664324 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416"
Feb 13 04:36:07 localhost.localdomain microshift[132400]: kubelet E0213 04:36:07.664978 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:36:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:08.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:36:12 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:36:12.558083 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:36:12 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:36:12.558111 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:36:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:13.286378 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:36:14 localhost.localdomain microshift[132400]: kubelet I0213
04:36:14.664552 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:36:14 localhost.localdomain microshift[132400]: kubelet E0213 04:36:14.665051 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:36:17 localhost.localdomain microshift[132400]: kubelet I0213 04:36:17.664241 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:36:17 localhost.localdomain microshift[132400]: kubelet E0213 04:36:17.664782 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:36:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:18.286358 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:18 localhost.localdomain microshift[132400]: kubelet I0213 04:36:18.664409 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:36:18 localhost.localdomain microshift[132400]: kubelet E0213 04:36:18.664809 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:36:18 localhost.localdomain microshift[132400]: kubelet I0213 04:36:18.665225 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:36:18 localhost.localdomain microshift[132400]: kubelet E0213 04:36:18.665409 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:36:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:23.286773 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:27 localhost.localdomain microshift[132400]: kubelet I0213 04:36:27.444030 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:36:27 localhost.localdomain microshift[132400]: kubelet E0213 04:36:27.444414 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:38:29.444401376 -0500 EST m=+1996.624747655 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:36:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:28.286948 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:28 localhost.localdomain microshift[132400]: kubelet I0213 04:36:28.663883 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:36:28 localhost.localdomain microshift[132400]: kubelet E0213 04:36:28.664994 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:36:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:36:28.733198 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:36:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:36:28.733350 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:36:29 localhost.localdomain microshift[132400]: kubelet I0213 04:36:29.663874 132400 scope.go:115] "RemoveContainer" 
containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:36:29 localhost.localdomain microshift[132400]: kubelet E0213 04:36:29.664425 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:36:32 localhost.localdomain microshift[132400]: kubelet I0213 04:36:32.663589 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:36:32 localhost.localdomain microshift[132400]: kubelet E0213 04:36:32.664230 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:36:32 localhost.localdomain microshift[132400]: kubelet I0213 04:36:32.664504 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:36:32 localhost.localdomain microshift[132400]: kubelet E0213 04:36:32.664718 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:36:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:33.287118 132400 net.go:46] ovn gateway IP address: 
192.168.122.17 Feb 13 04:36:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:38.286907 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:38 localhost.localdomain microshift[132400]: kubelet E0213 04:36:38.701445 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:36:38 localhost.localdomain microshift[132400]: kubelet E0213 04:36:38.701688 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:36:42 localhost.localdomain microshift[132400]: kubelet I0213 04:36:42.665247 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:36:42 localhost.localdomain microshift[132400]: kubelet E0213 04:36:42.666022 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:36:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:43.286599 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:43 localhost.localdomain microshift[132400]: kubelet I0213 04:36:43.664302 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 
13 04:36:43 localhost.localdomain microshift[132400]: kubelet E0213 04:36:43.664866 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:36:47 localhost.localdomain microshift[132400]: kubelet I0213 04:36:47.663993 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:36:47 localhost.localdomain microshift[132400]: kubelet I0213 04:36:47.664351 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:36:47 localhost.localdomain microshift[132400]: kubelet E0213 04:36:47.664570 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:36:47 localhost.localdomain microshift[132400]: kubelet E0213 04:36:47.664773 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:36:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:48.286830 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:53 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:53.286847 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:36:55 localhost.localdomain microshift[132400]: kubelet I0213 04:36:55.665447 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:36:55 localhost.localdomain microshift[132400]: kubelet E0213 04:36:55.665893 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:36:56 localhost.localdomain microshift[132400]: kubelet I0213 04:36:56.664995 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:36:56 localhost.localdomain microshift[132400]: kubelet E0213 04:36:56.666067 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:36:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:36:58.286869 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:37:00.132362 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:37:00 
localhost.localdomain microshift[132400]: kube-apiserver E0213 04:37:00.132678 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:37:00 localhost.localdomain microshift[132400]: kubelet I0213 04:37:00.664119 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:37:00 localhost.localdomain microshift[132400]: kubelet E0213 04:37:00.664518 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:37:01 localhost.localdomain microshift[132400]: kubelet I0213 04:37:01.663725 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:37:01 localhost.localdomain microshift[132400]: kubelet E0213 04:37:01.664327 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:37:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:03.286734 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:08.287050 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:08 localhost.localdomain 
microshift[132400]: kubelet I0213 04:37:08.664381 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:37:08 localhost.localdomain microshift[132400]: kubelet E0213 04:37:08.665177 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:37:10 localhost.localdomain microshift[132400]: kubelet I0213 04:37:10.663539 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:37:10 localhost.localdomain microshift[132400]: kubelet E0213 04:37:10.664236 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:37:11 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:37:11.073396 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:37:11 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:37:11.073423 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get 
clusterresourcequotas.quota.openshift.io) Feb 13 04:37:12 localhost.localdomain microshift[132400]: kubelet I0213 04:37:12.665224 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:37:12 localhost.localdomain microshift[132400]: kubelet E0213 04:37:12.665830 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:37:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:13.287047 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:14 localhost.localdomain microshift[132400]: kubelet I0213 04:37:14.665415 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:37:14 localhost.localdomain microshift[132400]: kubelet E0213 04:37:14.667042 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:37:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:18.287220 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:19 localhost.localdomain microshift[132400]: kubelet I0213 04:37:19.663568 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:37:19 localhost.localdomain microshift[132400]: kubelet E0213 04:37:19.663921 132400 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:37:22 localhost.localdomain microshift[132400]: kubelet I0213 04:37:22.663774 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:37:22 localhost.localdomain microshift[132400]: kubelet E0213 04:37:22.664057 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:37:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:23.287215 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:24 localhost.localdomain microshift[132400]: kubelet I0213 04:37:24.664680 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d" Feb 13 04:37:24 localhost.localdomain microshift[132400]: kubelet I0213 04:37:24.966297 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7} Feb 13 04:37:26 localhost.localdomain microshift[132400]: kubelet I0213 04:37:26.664360 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:37:26 
localhost.localdomain microshift[132400]: kubelet E0213 04:37:26.665164 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:37:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:28.286988 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:32 localhost.localdomain microshift[132400]: kubelet I0213 04:37:32.664116 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45" Feb 13 04:37:32 localhost.localdomain microshift[132400]: kubelet E0213 04:37:32.664969 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:37:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:33.286477 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:36 localhost.localdomain microshift[132400]: kubelet I0213 04:37:36.664121 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:37:36 localhost.localdomain microshift[132400]: kubelet I0213 04:37:36.986122 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2} Feb 13 04:37:38 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:38.287026 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet I0213 04:37:39.664115 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f" Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet E0213 04:37:39.664685 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet I0213 04:37:39.992792 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" exitCode=1 Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet I0213 04:37:39.993136 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2} Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet I0213 04:37:39.993164 132400 scope.go:115] "RemoveContainer" containerID="9fdee8f5dbaf7f4a7d00b8f0615b6f93ddfb76f4db8ce1507cf8dbc2f30bc416" Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet I0213 04:37:39.993562 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:37:39 localhost.localdomain microshift[132400]: kubelet E0213 04:37:39.993931 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:37:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:43.287137 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:37:46 localhost.localdomain microshift[132400]: kubelet I0213 04:37:46.666577 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:37:46 localhost.localdomain microshift[132400]: kubelet E0213 04:37:46.667136 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:37:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:48.286920 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:37:50 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:37:50.252443 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:37:50 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:37:50.252469 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:37:50 localhost.localdomain microshift[132400]: kubelet I0213 04:37:50.663931 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:37:50 localhost.localdomain microshift[132400]: kubelet E0213 04:37:50.664557 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:37:52 localhost.localdomain microshift[132400]: kubelet I0213 04:37:52.664129 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:37:52 localhost.localdomain microshift[132400]: kubelet E0213 04:37:52.664585 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:37:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:53.286812 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:37:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:37:54.776414 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:37:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:37:54.776773 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:37:57 localhost.localdomain microshift[132400]: kubelet I0213 04:37:57.664305 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:37:58 localhost.localdomain microshift[132400]: kubelet I0213 04:37:58.023562 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565}
Feb 13 04:37:58 localhost.localdomain microshift[132400]: kubelet I0213 04:37:58.024384 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:37:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:37:58.286504 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:37:59 localhost.localdomain microshift[132400]: kubelet I0213 04:37:59.024587 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:37:59 localhost.localdomain microshift[132400]: kubelet I0213 04:37:59.024646 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": dial tcp 10.42.0.6:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.027331 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.027372 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.028667 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" exitCode=255
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.028691 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7}
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.028711 132400 scope.go:115] "RemoveContainer" containerID="7976da7d9052a577f65dff0ec15ced58836d2eee9e4dc97a799a745c0939e29d"
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet I0213 04:38:00.028914 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:38:00 localhost.localdomain microshift[132400]: kubelet E0213 04:38:00.029064 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:38:01 localhost.localdomain microshift[132400]: kubelet I0213 04:38:01.031936 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" exitCode=1
Feb 13 04:38:01 localhost.localdomain microshift[132400]: kubelet I0213 04:38:01.031965 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565}
Feb 13 04:38:01 localhost.localdomain microshift[132400]: kubelet I0213 04:38:01.031990 132400 scope.go:115] "RemoveContainer" containerID="5c41be1fe8c025813a76ab77fac2c26bfd9e34dde523b3a4759866d6f5d98a45"
Feb 13 04:38:01 localhost.localdomain microshift[132400]: kubelet I0213 04:38:01.032226 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:38:01 localhost.localdomain microshift[132400]: kubelet E0213 04:38:01.032527 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:38:02 localhost.localdomain microshift[132400]: kubelet I0213 04:38:02.663872 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:38:02 localhost.localdomain microshift[132400]: kubelet E0213 04:38:02.664606 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:38:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:03.286848 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:03 localhost.localdomain microshift[132400]: kubelet I0213 04:38:03.664489 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:03 localhost.localdomain microshift[132400]: kubelet E0213 04:38:03.664890 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:08.286939 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:11 localhost.localdomain microshift[132400]: kubelet I0213 04:38:11.663708 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:38:11 localhost.localdomain microshift[132400]: kubelet E0213 04:38:11.664242 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:38:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:13.287240 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:13 localhost.localdomain microshift[132400]: kubelet I0213 04:38:13.664235 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:38:13 localhost.localdomain microshift[132400]: kubelet E0213 04:38:13.664533 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:38:14 localhost.localdomain microshift[132400]: kubelet I0213 04:38:14.664318 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:38:14 localhost.localdomain microshift[132400]: kubelet E0213 04:38:14.664926 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:38:14 localhost.localdomain microshift[132400]: kubelet I0213 04:38:14.666707 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:14 localhost.localdomain microshift[132400]: kubelet E0213 04:38:14.667250 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:18.286899 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:20 localhost.localdomain microshift[132400]: kubelet I0213 04:38:20.901883 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:38:20 localhost.localdomain microshift[132400]: kubelet I0213 04:38:20.902328 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:20 localhost.localdomain microshift[132400]: kubelet E0213 04:38:20.902621 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:22 localhost.localdomain microshift[132400]: kubelet I0213 04:38:22.665067 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:38:22 localhost.localdomain microshift[132400]: kubelet E0213 04:38:22.665801 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:38:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:23.287011 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:26 localhost.localdomain microshift[132400]: kubelet I0213 04:38:26.193025 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:38:26 localhost.localdomain microshift[132400]: kubelet I0213 04:38:26.193767 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:38:26 localhost.localdomain microshift[132400]: kubelet E0213 04:38:26.194180 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:38:27 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:38:27.027724 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:38:27 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:38:27.027756 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:38:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:28.287194 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:28 localhost.localdomain microshift[132400]: kubelet I0213 04:38:28.664084 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:38:29 localhost.localdomain microshift[132400]: kubelet I0213 04:38:29.078375 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5}
Feb 13 04:38:29 localhost.localdomain microshift[132400]: kubelet I0213 04:38:29.079141 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:38:29 localhost.localdomain microshift[132400]: kubelet I0213 04:38:29.536155 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:38:29 localhost.localdomain microshift[132400]: kubelet E0213 04:38:29.536271 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:40:31.536261715 -0500 EST m=+2118.716607985 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:38:31 localhost.localdomain microshift[132400]: kubelet I0213 04:38:31.663780 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:31 localhost.localdomain microshift[132400]: kubelet E0213 04:38:31.664111 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:33.286905 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:37 localhost.localdomain microshift[132400]: kubelet I0213 04:38:37.664130 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:38:37 localhost.localdomain microshift[132400]: kubelet E0213 04:38:37.664323 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:38:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:38.286980 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:38:40.534552 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:38:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:38:40.534582 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:38:40 localhost.localdomain microshift[132400]: kubelet I0213 04:38:40.664172 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:38:40 localhost.localdomain microshift[132400]: kubelet E0213 04:38:40.664484 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:38:41 localhost.localdomain microshift[132400]: kubelet E0213 04:38:41.891155 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:38:41 localhost.localdomain microshift[132400]: kubelet E0213 04:38:41.891189 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:38:42 localhost.localdomain microshift[132400]: kubelet I0213 04:38:42.346597 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:42 localhost.localdomain microshift[132400]: kubelet I0213 04:38:42.346918 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:43.287108 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:45 localhost.localdomain microshift[132400]: kubelet I0213 04:38:45.347558 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:45 localhost.localdomain microshift[132400]: kubelet I0213 04:38:45.347603 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:45 localhost.localdomain microshift[132400]: kubelet I0213 04:38:45.668092 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:45 localhost.localdomain microshift[132400]: kubelet E0213 04:38:45.670572 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:48.286853 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:48 localhost.localdomain microshift[132400]: kubelet I0213 04:38:48.347813 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:48 localhost.localdomain microshift[132400]: kubelet I0213 04:38:48.348106 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:49 localhost.localdomain microshift[132400]: kubelet I0213 04:38:49.664058 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:38:49 localhost.localdomain microshift[132400]: kubelet E0213 04:38:49.664745 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:38:51 localhost.localdomain microshift[132400]: kubelet I0213 04:38:51.348625 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:51 localhost.localdomain microshift[132400]: kubelet I0213 04:38:51.348922 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:53.286942 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:38:53 localhost.localdomain microshift[132400]: kubelet I0213 04:38:53.663554 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:38:53 localhost.localdomain microshift[132400]: kubelet E0213 04:38:53.663948 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:38:54 localhost.localdomain microshift[132400]: kubelet I0213 04:38:54.349982 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:54 localhost.localdomain microshift[132400]: kubelet I0213 04:38:54.350364 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:56 localhost.localdomain microshift[132400]: kubelet I0213 04:38:56.664049 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:38:56 localhost.localdomain microshift[132400]: kubelet E0213 04:38:56.664648 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:38:57 localhost.localdomain microshift[132400]: kubelet I0213 04:38:57.351142 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:38:57 localhost.localdomain microshift[132400]: kubelet I0213 04:38:57.351374 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:38:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:38:58.286726 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:39:00 localhost.localdomain microshift[132400]: kubelet I0213 04:39:00.352569 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:00 localhost.localdomain microshift[132400]: kubelet I0213 04:39:00.353069 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:39:00.493958 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:39:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:39:00.493989 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:39:02 localhost.localdomain microshift[132400]: kubelet I0213 04:39:02.664709 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:39:02 localhost.localdomain microshift[132400]: kubelet E0213 04:39:02.664868 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:39:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:03.286868 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:39:03 localhost.localdomain microshift[132400]: kubelet I0213 04:39:03.353593 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:03 localhost.localdomain microshift[132400]: kubelet I0213 04:39:03.353648 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:06 localhost.localdomain microshift[132400]: kubelet I0213 04:39:06.353969 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:06 localhost.localdomain microshift[132400]: kubelet I0213 04:39:06.354027 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:08.286870 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:39:08 localhost.localdomain microshift[132400]: kubelet I0213 04:39:08.664002 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:39:08 localhost.localdomain microshift[132400]: kubelet E0213 04:39:08.664380 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:39:09 localhost.localdomain microshift[132400]: kubelet I0213 04:39:09.355132 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:09 localhost.localdomain microshift[132400]: kubelet I0213 04:39:09.355472 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:11 localhost.localdomain microshift[132400]: kubelet I0213 04:39:11.663501 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:39:11 localhost.localdomain microshift[132400]: kubelet E0213 04:39:11.663901 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:39:12 localhost.localdomain microshift[132400]: kubelet I0213 04:39:12.355715 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:12 localhost.localdomain microshift[132400]: kubelet I0213 04:39:12.355780 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:13.286390 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:39:15 localhost.localdomain microshift[132400]: kubelet I0213 04:39:15.356894 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:15 localhost.localdomain microshift[132400]: kubelet I0213 04:39:15.357199 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:15 localhost.localdomain microshift[132400]: kubelet I0213 04:39:15.670268 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:39:15 localhost.localdomain microshift[132400]: kubelet E0213 04:39:15.670448 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:39:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:18.286868 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:39:18 localhost.localdomain microshift[132400]: kubelet I0213 04:39:18.357975 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:18 localhost.localdomain microshift[132400]: kubelet I0213 04:39:18.358031 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:39:21 localhost.localdomain microshift[132400]: kubelet I0213 04:39:21.358760 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:39:21 localhost.localdomain microshift[132400]: kubelet I0213 04:39:21.358840 132400 prober.go:109] "Probe failed" probeType="Readiness"
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:22 localhost.localdomain microshift[132400]: kubelet I0213 04:39:22.663365 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:39:22 localhost.localdomain microshift[132400]: kubelet E0213 04:39:22.663752 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:39:22 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:39:22.815400 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:39:22 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:39:22.815565 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:39:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:23.286486 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:24 localhost.localdomain microshift[132400]: kubelet I0213 04:39:24.359650 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:24 localhost.localdomain microshift[132400]: kubelet I0213 04:39:24.359741 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:24 localhost.localdomain microshift[132400]: kubelet I0213 04:39:24.663989 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:39:24 localhost.localdomain microshift[132400]: kubelet E0213 04:39:24.664472 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:39:27 localhost.localdomain microshift[132400]: kubelet I0213 04:39:27.360350 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:27 localhost.localdomain microshift[132400]: kubelet I0213 04:39:27.360400 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:28 localhost.localdomain microshift[132400]: sysconfwatch-controller 
I0213 04:39:28.286979 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:30 localhost.localdomain microshift[132400]: kubelet I0213 04:39:30.360720 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:30 localhost.localdomain microshift[132400]: kubelet I0213 04:39:30.360768 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:30 localhost.localdomain microshift[132400]: kubelet I0213 04:39:30.664959 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:39:30 localhost.localdomain microshift[132400]: kubelet E0213 04:39:30.665423 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:39:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:33.286976 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:33 localhost.localdomain microshift[132400]: kubelet I0213 04:39:33.361603 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 13 04:39:33 localhost.localdomain microshift[132400]: kubelet I0213 04:39:33.361826 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:34 localhost.localdomain microshift[132400]: kubelet I0213 04:39:34.631022 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:34 localhost.localdomain microshift[132400]: kubelet I0213 04:39:34.631382 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:35 localhost.localdomain microshift[132400]: kubelet I0213 04:39:35.669911 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:39:35 localhost.localdomain microshift[132400]: kubelet E0213 04:39:35.670542 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:39:36 localhost.localdomain microshift[132400]: kubelet I0213 04:39:36.362866 132400 patch_prober.go:28] 
interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:36 localhost.localdomain microshift[132400]: kubelet I0213 04:39:36.362934 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:37 localhost.localdomain microshift[132400]: kubelet I0213 04:39:37.663501 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:39:37 localhost.localdomain microshift[132400]: kubelet E0213 04:39:37.663852 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:39:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:38.287171 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:39 localhost.localdomain microshift[132400]: kubelet I0213 04:39:39.363746 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:39 localhost.localdomain microshift[132400]: kubelet I0213 04:39:39.364177 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:41 localhost.localdomain microshift[132400]: kubelet I0213 04:39:41.664338 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:39:41 localhost.localdomain microshift[132400]: kubelet E0213 04:39:41.665045 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:39:42 localhost.localdomain microshift[132400]: kubelet I0213 04:39:42.365207 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:42 localhost.localdomain microshift[132400]: kubelet I0213 04:39:42.365434 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:43.286193 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:44 localhost.localdomain microshift[132400]: kubelet I0213 04:39:44.631830 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: 
Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:44 localhost.localdomain microshift[132400]: kubelet I0213 04:39:44.632149 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:45 localhost.localdomain microshift[132400]: kubelet I0213 04:39:45.365604 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:45 localhost.localdomain microshift[132400]: kubelet I0213 04:39:45.365846 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:46 localhost.localdomain microshift[132400]: kubelet I0213 04:39:46.665441 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:39:46 localhost.localdomain microshift[132400]: kubelet E0213 04:39:46.668225 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:39:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:48.286779 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:48 localhost.localdomain microshift[132400]: kubelet I0213 04:39:48.366971 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:48 localhost.localdomain microshift[132400]: kubelet I0213 04:39:48.367042 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:49 localhost.localdomain microshift[132400]: kubelet I0213 04:39:49.664046 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:39:49 localhost.localdomain microshift[132400]: kubelet E0213 04:39:49.664328 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:39:51 localhost.localdomain microshift[132400]: kubelet I0213 04:39:51.368045 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:51 localhost.localdomain 
microshift[132400]: kubelet I0213 04:39:51.368092 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:53.286757 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet I0213 04:39:54.368645 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet I0213 04:39:54.369066 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet I0213 04:39:54.631968 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet I0213 04:39:54.632047 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet I0213 04:39:54.664488 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:39:54 localhost.localdomain microshift[132400]: kubelet E0213 04:39:54.664823 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:39:57 localhost.localdomain microshift[132400]: kubelet I0213 04:39:57.370189 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:39:57 localhost.localdomain microshift[132400]: kubelet I0213 04:39:57.370245 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:39:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:39:58.286906 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:39:59 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:39:59.539567 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:39:59 localhost.localdomain 
microshift[132400]: kube-apiserver E0213 04:39:59.539871 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:40:00 localhost.localdomain microshift[132400]: kubelet I0213 04:40:00.371104 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:40:00 localhost.localdomain microshift[132400]: kubelet I0213 04:40:00.371331 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:40:00 localhost.localdomain microshift[132400]: kubelet I0213 04:40:00.663958 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:40:00 localhost.localdomain microshift[132400]: kubelet E0213 04:40:00.664583 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:40:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:03.286569 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:40:03 localhost.localdomain 
microshift[132400]: kubelet I0213 04:40:03.372149 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:40:03 localhost.localdomain microshift[132400]: kubelet I0213 04:40:03.372440 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:40:03 localhost.localdomain microshift[132400]: kubelet I0213 04:40:03.663378 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:40:03 localhost.localdomain microshift[132400]: kubelet E0213 04:40:03.663878 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:40:04 localhost.localdomain microshift[132400]: kubelet I0213 04:40:04.632604 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:40:04 localhost.localdomain microshift[132400]: kubelet I0213 04:40:04.632968 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get 
\"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:40:05 localhost.localdomain microshift[132400]: kubelet I0213 04:40:05.672468 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:40:05 localhost.localdomain microshift[132400]: kubelet E0213 04:40:05.672666 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:40:06 localhost.localdomain microshift[132400]: kubelet I0213 04:40:06.373212 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:40:06 localhost.localdomain microshift[132400]: kubelet I0213 04:40:06.373261 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:40:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:08.287063 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:40:09 localhost.localdomain microshift[132400]: kubelet I0213 04:40:09.374178 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:09 localhost.localdomain microshift[132400]: kubelet I0213 04:40:09.374774 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:12 localhost.localdomain microshift[132400]: kubelet I0213 04:40:12.375718 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:12 localhost.localdomain microshift[132400]: kubelet I0213 04:40:12.376161 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:12 localhost.localdomain microshift[132400]: kubelet I0213 04:40:12.666330 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:40:12 localhost.localdomain microshift[132400]: kubelet E0213 04:40:12.667142 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:40:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:40:13.025824 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:40:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:40:13.025994 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:40:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:13.287204 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.632461 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.632525 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.632555 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.633073 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.633222 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5" gracePeriod=30
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet I0213 04:40:14.664060 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:40:14 localhost.localdomain microshift[132400]: kubelet E0213 04:40:14.664342 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:40:15 localhost.localdomain microshift[132400]: kubelet I0213 04:40:15.377265 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:15 localhost.localdomain microshift[132400]: kubelet I0213 04:40:15.377576 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:18.286842 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:18 localhost.localdomain microshift[132400]: kubelet I0213 04:40:18.378063 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:18 localhost.localdomain microshift[132400]: kubelet I0213 04:40:18.378266 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:18 localhost.localdomain microshift[132400]: kubelet I0213 04:40:18.664201 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:40:18 localhost.localdomain microshift[132400]: kubelet E0213 04:40:18.665219 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:40:21 localhost.localdomain microshift[132400]: kubelet I0213 04:40:21.378692 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:21 localhost.localdomain microshift[132400]: kubelet I0213 04:40:21.378737 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:23.286964 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:24 localhost.localdomain microshift[132400]: kubelet I0213 04:40:24.379881 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:24 localhost.localdomain microshift[132400]: kubelet I0213 04:40:24.380304 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:25 localhost.localdomain microshift[132400]: kubelet I0213 04:40:25.664043 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:40:25 localhost.localdomain microshift[132400]: kubelet E0213 04:40:25.664347 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:40:27 localhost.localdomain microshift[132400]: kubelet I0213 04:40:27.381495 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:27 localhost.localdomain microshift[132400]: kubelet I0213 04:40:27.381973 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:27 localhost.localdomain microshift[132400]: kubelet I0213 04:40:27.664331 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:40:27 localhost.localdomain microshift[132400]: kubelet E0213 04:40:27.665083 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:40:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:28.287129 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:29 localhost.localdomain microshift[132400]: kubelet I0213 04:40:29.663837 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:40:29 localhost.localdomain microshift[132400]: kubelet E0213 04:40:29.664101 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:40:30 localhost.localdomain microshift[132400]: kubelet I0213 04:40:30.382471 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:30 localhost.localdomain microshift[132400]: kubelet I0213 04:40:30.382528 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:31 localhost.localdomain microshift[132400]: kubelet I0213 04:40:31.587042 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:40:31 localhost.localdomain microshift[132400]: kubelet E0213 04:40:31.587727 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:42:33.587708224 -0500 EST m=+2240.768054508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:40:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:33.286391 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:33 localhost.localdomain microshift[132400]: kubelet I0213 04:40:33.383424 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:33 localhost.localdomain microshift[132400]: kubelet I0213 04:40:33.383485 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:35 localhost.localdomain microshift[132400]: kubelet I0213 04:40:35.280286 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5" exitCode=0
Feb 13 04:40:35 localhost.localdomain microshift[132400]: kubelet I0213 04:40:35.280333 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5}
Feb 13 04:40:35 localhost.localdomain microshift[132400]: kubelet I0213 04:40:35.280363 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273}
Feb 13 04:40:35 localhost.localdomain microshift[132400]: kubelet I0213 04:40:35.280383 132400 scope.go:115] "RemoveContainer" containerID="cb8c5be598d88d413b3de51a7ed31466ac8bf1c2361ea88ad051a06e6fc1191f"
Feb 13 04:40:36 localhost.localdomain microshift[132400]: kubelet I0213 04:40:36.283316 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:40:36 localhost.localdomain microshift[132400]: kubelet I0213 04:40:36.383810 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:36 localhost.localdomain microshift[132400]: kubelet I0213 04:40:36.383861 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:36 localhost.localdomain microshift[132400]: kubelet I0213 04:40:36.383898 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:40:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:38.286238 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:39 localhost.localdomain microshift[132400]: kubelet I0213 04:40:39.664407 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:40:39 localhost.localdomain microshift[132400]: kubelet E0213 04:40:39.665095 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:40:40 localhost.localdomain microshift[132400]: kubelet I0213 04:40:40.664610 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:40:40 localhost.localdomain microshift[132400]: kubelet E0213 04:40:40.665248 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:40:41 localhost.localdomain microshift[132400]: kubelet I0213 04:40:41.664148 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:40:41 localhost.localdomain microshift[132400]: kubelet E0213 04:40:41.664321 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:40:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:43.286937 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:44 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:40:44.134218 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:40:44 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:40:44.134439 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:40:45 localhost.localdomain microshift[132400]: kubelet E0213 04:40:45.098798 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:40:45 localhost.localdomain microshift[132400]: kubelet E0213 04:40:45.099159 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:40:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:48.286459 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:48 localhost.localdomain microshift[132400]: kubelet I0213 04:40:48.347742 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:48 localhost.localdomain microshift[132400]: kubelet I0213 04:40:48.347797 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:51 localhost.localdomain microshift[132400]: kubelet I0213 04:40:51.348108 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:51 localhost.localdomain microshift[132400]: kubelet I0213 04:40:51.348168 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:51 localhost.localdomain microshift[132400]: kubelet I0213 04:40:51.664264 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:40:51 localhost.localdomain microshift[132400]: kubelet E0213 04:40:51.664540 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:40:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:53.286265 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:53 localhost.localdomain microshift[132400]: kubelet I0213 04:40:53.663855 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:40:53 localhost.localdomain microshift[132400]: kubelet E0213 04:40:53.664105 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:40:53 localhost.localdomain microshift[132400]: kubelet I0213 04:40:53.664359 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:40:53 localhost.localdomain microshift[132400]: kubelet E0213 04:40:53.664893 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:40:54 localhost.localdomain microshift[132400]: kubelet I0213 04:40:54.349087 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:54 localhost.localdomain microshift[132400]: kubelet I0213 04:40:54.349140 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:57 localhost.localdomain microshift[132400]: kubelet I0213 04:40:57.349479 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:40:57 localhost.localdomain microshift[132400]: kubelet I0213 04:40:57.349520 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:40:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:40:58.286409 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:40:58 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:40:58.638590 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:40:58 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:40:58.638932 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:41:00 localhost.localdomain microshift[132400]: kubelet I0213 04:41:00.349620 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:00 localhost.localdomain microshift[132400]: kubelet I0213 04:41:00.349961 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:02 localhost.localdomain microshift[132400]: kubelet I0213 04:41:02.664241 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:41:02 localhost.localdomain microshift[132400]: kubelet E0213 04:41:02.664724 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:41:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:03.287043 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:03 localhost.localdomain microshift[132400]: kubelet I0213 04:41:03.350257 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:03 localhost.localdomain microshift[132400]: kubelet I0213 04:41:03.350527 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:04 localhost.localdomain microshift[132400]: kubelet I0213 04:41:04.665135 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:41:04 localhost.localdomain microshift[132400]: kubelet E0213 04:41:04.665596 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:41:06 localhost.localdomain microshift[132400]: kubelet I0213 04:41:06.352737 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:06 localhost.localdomain microshift[132400]: kubelet I0213 04:41:06.352785 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:07 localhost.localdomain microshift[132400]: kubelet I0213 04:41:07.663957 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:41:07 localhost.localdomain microshift[132400]: kubelet E0213 04:41:07.664304 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:41:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:08.286904 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:09 localhost.localdomain microshift[132400]: kubelet I0213 04:41:09.353448 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:09 localhost.localdomain microshift[132400]: kubelet I0213 04:41:09.353935 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:12 localhost.localdomain microshift[132400]: kubelet I0213 04:41:12.354135 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:12 localhost.localdomain microshift[132400]: kubelet I0213 04:41:12.354466 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:13.287078 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:41:14.727752 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:41:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:41:14.728169 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:41:15 localhost.localdomain microshift[132400]: kubelet I0213 04:41:15.354857 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:15 localhost.localdomain microshift[132400]: kubelet I0213 04:41:15.354901 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:15 localhost.localdomain microshift[132400]: kubelet I0213 04:41:15.665483 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:41:15 localhost.localdomain microshift[132400]: kubelet E0213 04:41:15.665800 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:41:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:18.287087 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:18 localhost.localdomain microshift[132400]: kubelet I0213 04:41:18.355945 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:18 localhost.localdomain microshift[132400]: kubelet I0213 04:41:18.355995 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:19 localhost.localdomain microshift[132400]: kubelet I0213 04:41:19.664039 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:41:19 localhost.localdomain microshift[132400]: kubelet E0213 04:41:19.664283 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:41:20 localhost.localdomain microshift[132400]: kubelet I0213 04:41:20.664868 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:41:20 localhost.localdomain microshift[132400]: kubelet E0213 04:41:20.666461 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:41:21 localhost.localdomain microshift[132400]: kubelet I0213 04:41:21.356102 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:21 localhost.localdomain microshift[132400]: kubelet I0213 04:41:21.356150 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:23.287055 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:24 localhost.localdomain microshift[132400]: kubelet I0213 04:41:24.356718 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:24 localhost.localdomain microshift[132400]: kubelet I0213 04:41:24.357268 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:26 localhost.localdomain microshift[132400]: kubelet I0213 04:41:26.666086 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:41:26 localhost.localdomain microshift[132400]: kubelet E0213 04:41:26.666372 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:41:27 localhost.localdomain microshift[132400]: kubelet I0213 04:41:27.358367 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:27 localhost.localdomain microshift[132400]: kubelet I0213 04:41:27.358563 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:28.286620 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:30 localhost.localdomain microshift[132400]: kubelet I0213 04:41:30.359056 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:30 localhost.localdomain microshift[132400]: kubelet I0213 04:41:30.359380 132400 
prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:30 localhost.localdomain microshift[132400]: kubelet I0213 04:41:30.663910 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:41:30 localhost.localdomain microshift[132400]: kubelet E0213 04:41:30.664501 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:41:31 localhost.localdomain microshift[132400]: kubelet I0213 04:41:31.663741 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:41:31 localhost.localdomain microshift[132400]: kubelet E0213 04:41:31.664102 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:41:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:33.286278 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:41:33 localhost.localdomain microshift[132400]: kubelet I0213 04:41:33.359859 132400 patch_prober.go:28] interesting 
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:33 localhost.localdomain microshift[132400]: kubelet I0213 04:41:33.359913 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:36 localhost.localdomain microshift[132400]: kubelet I0213 04:41:36.360040 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:36 localhost.localdomain microshift[132400]: kubelet I0213 04:41:36.360237 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:38.287029 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:41:39 localhost.localdomain microshift[132400]: kubelet I0213 04:41:39.360867 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:39 localhost.localdomain microshift[132400]: kubelet I0213 04:41:39.360919 132400 prober.go:109] "Probe 
failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:40 localhost.localdomain microshift[132400]: kubelet I0213 04:41:40.664028 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:41:40 localhost.localdomain microshift[132400]: kubelet E0213 04:41:40.664870 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:41:42 localhost.localdomain microshift[132400]: kubelet I0213 04:41:42.361220 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:42 localhost.localdomain microshift[132400]: kubelet I0213 04:41:42.361574 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:43.286935 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:41:44 localhost.localdomain microshift[132400]: kubelet I0213 04:41:44.632478 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns 
namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:44 localhost.localdomain microshift[132400]: kubelet I0213 04:41:44.632531 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:45 localhost.localdomain microshift[132400]: kubelet I0213 04:41:45.361948 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:45 localhost.localdomain microshift[132400]: kubelet I0213 04:41:45.362000 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:45 localhost.localdomain microshift[132400]: kubelet I0213 04:41:45.664288 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:41:45 localhost.localdomain microshift[132400]: kubelet E0213 04:41:45.668752 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" 
pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:41:46 localhost.localdomain microshift[132400]: kubelet I0213 04:41:46.667055 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:41:46 localhost.localdomain microshift[132400]: kubelet E0213 04:41:46.667414 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:41:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:48.286246 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:41:48 localhost.localdomain microshift[132400]: kubelet I0213 04:41:48.363034 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:48 localhost.localdomain microshift[132400]: kubelet I0213 04:41:48.363097 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:51 localhost.localdomain microshift[132400]: kubelet I0213 04:41:51.363882 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:51 localhost.localdomain microshift[132400]: kubelet I0213 04:41:51.363929 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:41:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:53.286401 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:41:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:41:54.247076 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:41:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:41:54.247284 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:41:54 localhost.localdomain microshift[132400]: kubelet I0213 04:41:54.364855 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:41:54 localhost.localdomain microshift[132400]: kubelet I0213 04:41:54.365171 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Feb 13 04:41:54 localhost.localdomain microshift[132400]: kubelet I0213 04:41:54.632958 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:54 localhost.localdomain microshift[132400]: kubelet I0213 04:41:54.633176 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:55 localhost.localdomain microshift[132400]: kubelet I0213 04:41:55.663678 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:41:55 localhost.localdomain microshift[132400]: kubelet E0213 04:41:55.664065 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:41:57 localhost.localdomain microshift[132400]: kubelet I0213 04:41:57.366028 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:41:57 localhost.localdomain microshift[132400]: kubelet I0213 04:41:57.366379 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:41:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:41:58.286794 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:41:59 localhost.localdomain microshift[132400]: kubelet I0213 04:41:59.664196 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:41:59 localhost.localdomain microshift[132400]: kubelet E0213 04:41:59.664505 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:42:00 localhost.localdomain microshift[132400]: kubelet I0213 04:42:00.366668 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:00 localhost.localdomain microshift[132400]: kubelet I0213 04:42:00.366718 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:00 localhost.localdomain microshift[132400]: kubelet I0213 04:42:00.664288 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:42:00 localhost.localdomain microshift[132400]: kubelet E0213 04:42:00.664460 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:42:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:03.286920 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:42:03 localhost.localdomain microshift[132400]: kubelet I0213 04:42:03.367030 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:03 localhost.localdomain microshift[132400]: kubelet I0213 04:42:03.367239 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:04 localhost.localdomain microshift[132400]: kubelet I0213 04:42:04.632708 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:04 localhost.localdomain microshift[132400]: kubelet I0213 04:42:04.633025 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:06 localhost.localdomain microshift[132400]: kubelet I0213 04:42:06.367356 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:06 localhost.localdomain microshift[132400]: kubelet I0213 04:42:06.367396 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:08.286856 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:42:09 localhost.localdomain microshift[132400]: kubelet I0213 04:42:09.367566 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:09 localhost.localdomain microshift[132400]: kubelet I0213 04:42:09.367625 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:10 localhost.localdomain microshift[132400]: kubelet I0213 04:42:10.665413 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:42:10 localhost.localdomain microshift[132400]: kubelet E0213 04:42:10.666221 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:42:10 localhost.localdomain microshift[132400]: kubelet I0213 04:42:10.666965 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:42:10 localhost.localdomain microshift[132400]: kubelet E0213 04:42:10.667498 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:42:11 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:42:11.610238 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:42:11 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:42:11.610275 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:42:11 localhost.localdomain microshift[132400]: kubelet I0213 04:42:11.664028 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:42:11 localhost.localdomain microshift[132400]: kubelet E0213 04:42:11.664373 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:42:12 localhost.localdomain microshift[132400]: kubelet I0213 04:42:12.368643 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:12 localhost.localdomain microshift[132400]: kubelet I0213 04:42:12.368803 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:13.286894 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:42:14 localhost.localdomain microshift[132400]: kubelet I0213 04:42:14.631842 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:14 localhost.localdomain microshift[132400]: kubelet I0213 04:42:14.631893 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:15 localhost.localdomain microshift[132400]: kubelet I0213 04:42:15.369593 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:15 localhost.localdomain microshift[132400]: kubelet I0213 04:42:15.369878 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:18.286332 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:42:18 localhost.localdomain microshift[132400]: kubelet I0213 04:42:18.370233 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:18 localhost.localdomain microshift[132400]: kubelet I0213 04:42:18.370289 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:21 localhost.localdomain microshift[132400]: kubelet I0213 04:42:21.371300 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:21 localhost.localdomain microshift[132400]: kubelet I0213 04:42:21.371351 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:22 localhost.localdomain microshift[132400]: kubelet I0213 04:42:22.664054 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2"
Feb 13 04:42:22 localhost.localdomain microshift[132400]: kubelet E0213 04:42:22.664334 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:42:22 localhost.localdomain microshift[132400]: kubelet I0213 04:42:22.665708 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565"
Feb 13 04:42:22 localhost.localdomain microshift[132400]: kubelet E0213 04:42:22.666021 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:42:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:23.287099 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:42:23 localhost.localdomain microshift[132400]: kubelet I0213 04:42:23.663478 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:42:23 localhost.localdomain microshift[132400]: kubelet E0213 04:42:23.663698 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.371816 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.372170 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.632445 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.632720 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.632785 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.633142 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 04:42:24 localhost.localdomain microshift[132400]: kubelet I0213 04:42:24.633279 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" gracePeriod=30
Feb 13 04:42:27 localhost.localdomain microshift[132400]: kubelet I0213 04:42:27.373125 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:42:27 localhost.localdomain microshift[132400]: kubelet I0213 04:42:27.373193 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:28.287168 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:30 localhost.localdomain microshift[132400]: kubelet I0213 04:42:30.373896 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:30 localhost.localdomain microshift[132400]: kubelet I0213 04:42:30.374371 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:33.286839 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:33 localhost.localdomain microshift[132400]: kubelet I0213 04:42:33.374875 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:33 localhost.localdomain microshift[132400]: kubelet I0213 04:42:33.375046 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:33 localhost.localdomain microshift[132400]: kubelet I0213 
04:42:33.622013 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:42:33 localhost.localdomain microshift[132400]: kubelet E0213 04:42:33.622275 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:44:35.622264345 -0500 EST m=+2362.802610616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:42:35 localhost.localdomain microshift[132400]: kubelet I0213 04:42:35.664423 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:42:35 localhost.localdomain microshift[132400]: kubelet E0213 04:42:35.677081 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:42:35 localhost.localdomain microshift[132400]: kubelet I0213 04:42:35.677262 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:42:35 localhost.localdomain microshift[132400]: 
kubelet E0213 04:42:35.677846 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:42:35 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:42:35.762013 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:42:35 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:42:35.762239 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:42:36 localhost.localdomain microshift[132400]: kubelet I0213 04:42:36.375462 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:36 localhost.localdomain microshift[132400]: kubelet I0213 04:42:36.375536 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:36 localhost.localdomain microshift[132400]: kubelet I0213 04:42:36.665098 132400 scope.go:115] "RemoveContainer" 
containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:42:36 localhost.localdomain microshift[132400]: kubelet E0213 04:42:36.667424 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:42:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:38.286735 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:39 localhost.localdomain microshift[132400]: kubelet I0213 04:42:39.376711 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:39 localhost.localdomain microshift[132400]: kubelet I0213 04:42:39.376787 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:42 localhost.localdomain microshift[132400]: kubelet I0213 04:42:42.377186 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:42 localhost.localdomain microshift[132400]: kubelet I0213 04:42:42.377265 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:43.287241 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:44 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:42:44.501466 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:42:44 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:42:44.501488 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:42:44 localhost.localdomain microshift[132400]: kubelet E0213 04:42:44.753859 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.378114 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.378179 132400 
prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.486472 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" exitCode=0 Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.486512 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273} Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.486536 132400 scope.go:115] "RemoveContainer" containerID="1d63207eac88687b2ee899a0ad4daa0fdbddff76293cf120fd08bdab4011e9a5" Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet I0213 04:42:45.486806 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:42:45 localhost.localdomain microshift[132400]: kubelet E0213 04:42:45.487172 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:42:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:48.286899 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet E0213 04:42:48.298717 132400 
kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet E0213 04:42:48.298751 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet I0213 04:42:48.379191 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet I0213 04:42:48.379261 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet I0213 04:42:48.664075 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet I0213 04:42:48.665556 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:42:48 localhost.localdomain microshift[132400]: kubelet E0213 04:42:48.666861 132400 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:42:49 localhost.localdomain microshift[132400]: kubelet I0213 04:42:49.498466 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1} Feb 13 04:42:50 localhost.localdomain microshift[132400]: kubelet I0213 04:42:50.664299 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:42:50 localhost.localdomain microshift[132400]: kubelet E0213 04:42:50.665170 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:42:52 localhost.localdomain microshift[132400]: kubelet I0213 04:42:52.521693 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1} Feb 13 04:42:52 localhost.localdomain microshift[132400]: kubelet I0213 04:42:52.521741 132400 scope.go:115] "RemoveContainer" containerID="6df6ba614dcb3a5fcccae718eb155924b35383d2bbdb4b9c3c0b275091c35bf2" Feb 13 04:42:52 localhost.localdomain microshift[132400]: kubelet I0213 
04:42:52.522165 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:42:52 localhost.localdomain microshift[132400]: kubelet E0213 04:42:52.522500 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:42:52 localhost.localdomain microshift[132400]: kubelet I0213 04:42:52.522701 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" exitCode=1 Feb 13 04:42:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:53.286730 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:42:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:42:58.287203 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:43:00 localhost.localdomain microshift[132400]: kubelet I0213 04:43:00.663981 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:43:00 localhost.localdomain microshift[132400]: kubelet E0213 04:43:00.664935 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:43:01 localhost.localdomain microshift[132400]: kubelet I0213 04:43:01.664265 132400 scope.go:115] "RemoveContainer" 
containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7" Feb 13 04:43:02 localhost.localdomain microshift[132400]: kubelet I0213 04:43:02.540928 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2} Feb 13 04:43:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:03.287224 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:43:04 localhost.localdomain microshift[132400]: kubelet I0213 04:43:04.664208 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:43:04 localhost.localdomain microshift[132400]: kubelet E0213 04:43:04.664893 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:43:05 localhost.localdomain microshift[132400]: kubelet I0213 04:43:05.665236 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:43:06 localhost.localdomain microshift[132400]: kubelet I0213 04:43:06.550746 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f} Feb 13 04:43:06 localhost.localdomain microshift[132400]: kubelet I0213 04:43:06.551844 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:43:07 localhost.localdomain microshift[132400]: kubelet I0213 04:43:07.552472 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:43:07 localhost.localdomain microshift[132400]: kubelet I0213 04:43:07.552517 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:43:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:08.287104 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:43:08 localhost.localdomain microshift[132400]: kubelet I0213 04:43:08.552942 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:43:08 localhost.localdomain microshift[132400]: kubelet I0213 04:43:08.552977 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.553940 132400 patch_prober.go:28] interesting 
pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.553987 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.556167 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" exitCode=1 Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.556268 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f} Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.556313 132400 scope.go:115] "RemoveContainer" containerID="ad89704489a19edb6812241312aeb0676aae0d01386cfecf59558e54d6caa565" Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet I0213 04:43:09.556594 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:43:09 localhost.localdomain microshift[132400]: kubelet E0213 04:43:09.556958 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:43:11 localhost.localdomain microshift[132400]: kubelet I0213 04:43:11.663649 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:43:11 localhost.localdomain microshift[132400]: kubelet E0213 04:43:11.664337 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:43:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:13.286789 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:43:15 localhost.localdomain microshift[132400]: kubelet I0213 04:43:15.663491 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:43:15 localhost.localdomain microshift[132400]: kubelet E0213 04:43:15.663811 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:43:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:18.286992 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:43:19 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:43:19.181453 132400 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:43:19 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:43:19.181483 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:43:19 localhost.localdomain microshift[132400]: kubelet I0213 04:43:19.663952 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:43:19 localhost.localdomain microshift[132400]: kubelet E0213 04:43:19.664965 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:43:20 localhost.localdomain microshift[132400]: kubelet I0213 04:43:20.902047 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 04:43:20 localhost.localdomain microshift[132400]: kubelet I0213 04:43:20.902775 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:43:20 localhost.localdomain microshift[132400]: kubelet E0213 04:43:20.903150 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:43:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:23.286824 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:26 localhost.localdomain microshift[132400]: kubelet I0213 04:43:26.192457 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:43:26 localhost.localdomain microshift[132400]: kubelet I0213 04:43:26.193058 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:43:26 localhost.localdomain microshift[132400]: kubelet E0213 04:43:26.193430 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:43:26 localhost.localdomain microshift[132400]: kubelet I0213 04:43:26.666325 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:43:26 localhost.localdomain microshift[132400]: kubelet E0213 04:43:26.666718 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:43:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:28.286252 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:33.286818 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:34 localhost.localdomain microshift[132400]: kubelet I0213 04:43:34.665350 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:43:34 localhost.localdomain microshift[132400]: kubelet E0213 04:43:34.665641 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:43:36 localhost.localdomain microshift[132400]: kubelet I0213 04:43:36.600440 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" exitCode=255
Feb 13 04:43:36 localhost.localdomain microshift[132400]: kubelet I0213 04:43:36.600796 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2}
Feb 13 04:43:36 localhost.localdomain microshift[132400]: kubelet I0213 04:43:36.600879 132400 scope.go:115] "RemoveContainer" containerID="a7a4efa62d9e1cbd9d637ea3cd174b72bc6b8c15eca95f3c97c993e87106b1e7"
Feb 13 04:43:36 localhost.localdomain microshift[132400]: kubelet I0213 04:43:36.601108 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:43:36 localhost.localdomain microshift[132400]: kubelet E0213 04:43:36.601293 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:43:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:38.287255 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:39 localhost.localdomain microshift[132400]: kubelet I0213 04:43:39.663408 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:43:39 localhost.localdomain microshift[132400]: kubelet E0213 04:43:39.663785 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:43:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:43:40.390393 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:43:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:43:40.390425 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:43:41 localhost.localdomain microshift[132400]: kubelet I0213 04:43:41.663960 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:43:41 localhost.localdomain microshift[132400]: kubelet E0213 04:43:41.664267 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:43:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:43.286650 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:46 localhost.localdomain microshift[132400]: kubelet I0213 04:43:46.664209 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:43:46 localhost.localdomain microshift[132400]: kubelet E0213 04:43:46.664871 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:43:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:48.286706 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:50 localhost.localdomain microshift[132400]: kubelet I0213 04:43:50.664026 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:43:50 localhost.localdomain microshift[132400]: kubelet E0213 04:43:50.664249 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:43:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:53.286521 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:54 localhost.localdomain microshift[132400]: kubelet I0213 04:43:54.664616 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:43:54 localhost.localdomain microshift[132400]: kubelet E0213 04:43:54.665550 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:43:56 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:43:56.058154 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:43:56 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:43:56.058462 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:43:56 localhost.localdomain microshift[132400]: kubelet I0213 04:43:56.663969 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:43:56 localhost.localdomain microshift[132400]: kubelet E0213 04:43:56.664273 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:43:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:43:58.286303 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:43:58 localhost.localdomain microshift[132400]: kubelet I0213 04:43:58.665338 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:43:58 localhost.localdomain microshift[132400]: kubelet E0213 04:43:58.665826 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:44:02 localhost.localdomain microshift[132400]: kubelet I0213 04:44:02.664061 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:44:02 localhost.localdomain microshift[132400]: kubelet E0213 04:44:02.664277 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:44:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:03.286585 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:07 localhost.localdomain microshift[132400]: kubelet I0213 04:44:07.663829 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:44:07 localhost.localdomain microshift[132400]: kubelet E0213 04:44:07.664755 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:44:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:08.286852 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:08 localhost.localdomain microshift[132400]: kubelet I0213 04:44:08.664307 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:44:08 localhost.localdomain microshift[132400]: kubelet E0213 04:44:08.664940 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:44:10 localhost.localdomain microshift[132400]: kubelet I0213 04:44:10.665288 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:44:10 localhost.localdomain microshift[132400]: kubelet E0213 04:44:10.665629 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:44:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:13.286828 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:17 localhost.localdomain microshift[132400]: kubelet I0213 04:44:17.663295 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:44:17 localhost.localdomain microshift[132400]: kubelet E0213 04:44:17.663499 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:44:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:18.287360 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:20 localhost.localdomain microshift[132400]: kubelet I0213 04:44:20.664024 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:44:20 localhost.localdomain microshift[132400]: kubelet E0213 04:44:20.664283 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:44:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:23.286652 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:23 localhost.localdomain microshift[132400]: kubelet I0213 04:44:23.663716 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:44:23 localhost.localdomain microshift[132400]: kubelet E0213 04:44:23.664127 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:44:25 localhost.localdomain microshift[132400]: kubelet I0213 04:44:25.669055 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:44:25 localhost.localdomain microshift[132400]: kubelet E0213 04:44:25.669318 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:44:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:28.287069 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:44:28.481874 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:44:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:44:28.481901 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:44:29 localhost.localdomain microshift[132400]: kubelet I0213 04:44:29.663491 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:44:29 localhost.localdomain microshift[132400]: kubelet E0213 04:44:29.664053 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:44:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:33.287003 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:33 localhost.localdomain microshift[132400]: kubelet I0213 04:44:33.664151 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:44:33 localhost.localdomain microshift[132400]: kubelet E0213 04:44:33.664612 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:44:35 localhost.localdomain microshift[132400]: kubelet I0213 04:44:35.644604 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:44:35 localhost.localdomain microshift[132400]: kubelet E0213 04:44:35.644821 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:46:37.644805287 -0500 EST m=+2484.825151573 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:44:35 localhost.localdomain microshift[132400]: kubelet I0213 04:44:35.664477 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:44:35 localhost.localdomain microshift[132400]: kubelet E0213 04:44:35.668953 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:44:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:38.286891 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:38 localhost.localdomain microshift[132400]: kubelet I0213 04:44:38.665327 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:44:38 localhost.localdomain microshift[132400]: kubelet E0213 04:44:38.665653 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:44:41 localhost.localdomain microshift[132400]: kubelet I0213 04:44:41.664235 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:44:41 localhost.localdomain microshift[132400]: kubelet E0213 04:44:41.664417 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:44:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:43.286546 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:47 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:44:47.016058 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:44:47 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:44:47.016106 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:44:47 localhost.localdomain microshift[132400]: kubelet I0213 04:44:47.664168 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:44:47 localhost.localdomain microshift[132400]: kubelet E0213 04:44:47.664503 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:44:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:48.286866 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:48 localhost.localdomain microshift[132400]: kubelet I0213 04:44:48.664880 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:44:48 localhost.localdomain microshift[132400]: kubelet E0213 04:44:48.665460 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:44:51 localhost.localdomain microshift[132400]: kubelet E0213 04:44:51.493576 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:44:51 localhost.localdomain microshift[132400]: kubelet E0213 04:44:51.493610 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:44:51 localhost.localdomain microshift[132400]: kubelet I0213 04:44:51.663407 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:44:51 localhost.localdomain microshift[132400]: kubelet E0213 04:44:51.663765 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:44:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:53.286919 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:55 localhost.localdomain microshift[132400]: kubelet I0213 04:44:55.663759 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:44:55 localhost.localdomain microshift[132400]: kubelet E0213 04:44:55.664205 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:44:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:44:58.286569 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:44:59 localhost.localdomain microshift[132400]: kubelet I0213 04:44:59.663575 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:44:59 localhost.localdomain microshift[132400]: kubelet I0213 04:44:59.663937 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:44:59 localhost.localdomain microshift[132400]: kubelet E0213 04:44:59.664226 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:44:59 localhost.localdomain microshift[132400]: kubelet E0213 04:44:59.664241 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:45:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:03.286835 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:04 localhost.localdomain microshift[132400]: kubelet I0213 04:45:04.664381 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:45:04 localhost.localdomain microshift[132400]: kubelet E0213 04:45:04.665084 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:45:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:08.286506 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:09 localhost.localdomain microshift[132400]: kubelet I0213 04:45:09.663750 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:45:09 localhost.localdomain microshift[132400]: kubelet E0213 04:45:09.664344 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:45:12 localhost.localdomain microshift[132400]: kubelet I0213 04:45:12.663460 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:45:12 localhost.localdomain microshift[132400]: kubelet E0213 04:45:12.663736 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:45:12 localhost.localdomain microshift[132400]: kubelet I0213 04:45:12.664099 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:45:12 localhost.localdomain microshift[132400]: kubelet E0213 04:45:12.664559 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:45:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:13.286939 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:18 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:45:18.137849 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:45:18 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:45:18.137873 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:45:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:18.287163 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:19 localhost.localdomain microshift[132400]: kubelet I0213 04:45:19.663460 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:45:19 localhost.localdomain microshift[132400]: kubelet E0213 04:45:19.663770 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:45:20 localhost.localdomain microshift[132400]: kubelet I0213 04:45:20.664207 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:45:20 localhost.localdomain microshift[132400]: kubelet E0213 04:45:20.664737 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:45:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:23.286616 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:23 localhost.localdomain microshift[132400]: kubelet I0213 04:45:23.663800 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:45:23 localhost.localdomain microshift[132400]: kubelet E0213 04:45:23.664093 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:45:26 localhost.localdomain microshift[132400]: kubelet I0213 04:45:26.666169 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:45:26 localhost.localdomain microshift[132400]: kubelet E0213 04:45:26.666678 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:45:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:28.286501 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:45:30.882840 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:45:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:45:30.882865 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:45:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:33.286820 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:33 localhost.localdomain microshift[132400]: kubelet I0213 04:45:33.663755 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:45:33 localhost.localdomain microshift[132400]: kubelet I0213 04:45:33.664348 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:45:33 localhost.localdomain microshift[132400]: kubelet E0213 04:45:33.664477 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:45:33 localhost.localdomain microshift[132400]: kubelet E0213 04:45:33.665010 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:45:37 localhost.localdomain microshift[132400]: kubelet I0213 04:45:37.664083 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:45:37 localhost.localdomain microshift[132400]: kubelet E0213 04:45:37.664455 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:45:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:38.287136 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:39 localhost.localdomain microshift[132400]: kubelet I0213 04:45:39.663979 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:45:39 localhost.localdomain microshift[132400]: kubelet E0213 04:45:39.664628 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:45:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:43.287061 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:44 localhost.localdomain microshift[132400]: kubelet I0213 04:45:44.664985 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:45:44 localhost.localdomain microshift[132400]: kubelet E0213 04:45:44.665358 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:45:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:48.286637 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:45:48 localhost.localdomain microshift[132400]: kubelet I0213 04:45:48.664174 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:45:48 localhost.localdomain microshift[132400]: kubelet E0213 04:45:48.664696 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:45:48 localhost.localdomain microshift[132400]: kubelet I0213 04:45:48.665446 132400 scope.go:115]
"RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:45:48 localhost.localdomain microshift[132400]: kubelet E0213 04:45:48.665796 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:45:49 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:45:49.715649 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:45:49 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:45:49.716021 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:45:50 localhost.localdomain microshift[132400]: kubelet I0213 04:45:50.663940 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:45:50 localhost.localdomain microshift[132400]: kubelet E0213 04:45:50.664300 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:45:53 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 04:45:53.286936 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:45:57 localhost.localdomain microshift[132400]: kubelet I0213 04:45:57.664150 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:45:57 localhost.localdomain microshift[132400]: kubelet E0213 04:45:57.665036 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:45:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:45:58.286728 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:00 localhost.localdomain microshift[132400]: kubelet I0213 04:46:00.663453 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:46:00 localhost.localdomain microshift[132400]: kubelet E0213 04:46:00.663822 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:46:02 localhost.localdomain microshift[132400]: kubelet I0213 04:46:02.663902 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:46:02 localhost.localdomain microshift[132400]: kubelet E0213 04:46:02.664069 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:46:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:03.286969 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:03 localhost.localdomain microshift[132400]: kubelet I0213 04:46:03.663838 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:46:03 localhost.localdomain microshift[132400]: kubelet E0213 04:46:03.664184 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:46:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:08.286752 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:10 localhost.localdomain microshift[132400]: kubelet I0213 04:46:10.664192 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:46:10 localhost.localdomain microshift[132400]: kubelet E0213 04:46:10.664543 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:46:13 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 04:46:13.287418 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:14 localhost.localdomain microshift[132400]: kubelet I0213 04:46:14.664430 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:46:14 localhost.localdomain microshift[132400]: kubelet E0213 04:46:14.665530 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:46:15 localhost.localdomain microshift[132400]: kubelet I0213 04:46:15.666936 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:46:15 localhost.localdomain microshift[132400]: kubelet E0213 04:46:15.667977 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:46:17 localhost.localdomain microshift[132400]: kubelet I0213 04:46:17.663607 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:46:17 localhost.localdomain microshift[132400]: kubelet E0213 04:46:17.664127 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:46:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:18.286480 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:23.286829 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:23 localhost.localdomain microshift[132400]: kubelet I0213 04:46:23.664415 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:46:23 localhost.localdomain microshift[132400]: kubelet E0213 04:46:23.665158 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:46:26 localhost.localdomain microshift[132400]: kubelet I0213 04:46:26.667137 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:46:26 localhost.localdomain microshift[132400]: kubelet E0213 04:46:26.667650 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:46:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:46:28.197548 132400 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:46:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:46:28.197581 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:46:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:28.286298 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:29 localhost.localdomain microshift[132400]: kubelet I0213 04:46:29.663727 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:46:29 localhost.localdomain microshift[132400]: kubelet E0213 04:46:29.663969 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:46:30 localhost.localdomain microshift[132400]: kubelet I0213 04:46:30.664267 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:46:30 localhost.localdomain microshift[132400]: kubelet E0213 04:46:30.664504 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" 
podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:46:33 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:46:33.021648 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:46:33 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:46:33.021686 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:46:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:33.286476 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:36 localhost.localdomain microshift[132400]: kubelet I0213 04:46:36.664192 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:46:36 localhost.localdomain microshift[132400]: kubelet E0213 04:46:36.667961 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:46:37 localhost.localdomain microshift[132400]: kubelet I0213 04:46:37.686898 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " 
pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:46:37 localhost.localdomain microshift[132400]: kubelet E0213 04:46:37.687007 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:48:39.686997797 -0500 EST m=+2606.867344074 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:46:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:38.286953 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:39 localhost.localdomain microshift[132400]: kubelet I0213 04:46:39.663808 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:46:39 localhost.localdomain microshift[132400]: kubelet E0213 04:46:39.664437 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:46:40 localhost.localdomain microshift[132400]: kubelet I0213 04:46:40.664393 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:46:40 localhost.localdomain microshift[132400]: kubelet E0213 04:46:40.665310 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:46:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:43.286828 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:43 localhost.localdomain microshift[132400]: kubelet I0213 04:46:43.663687 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:46:43 localhost.localdomain microshift[132400]: kubelet E0213 04:46:43.664118 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:46:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:48.286530 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:49 localhost.localdomain microshift[132400]: kubelet I0213 04:46:49.664188 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:46:49 localhost.localdomain microshift[132400]: kubelet E0213 04:46:49.664499 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:46:50 localhost.localdomain microshift[132400]: 
kubelet I0213 04:46:50.663992 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:46:50 localhost.localdomain microshift[132400]: kubelet E0213 04:46:50.664546 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:46:52 localhost.localdomain microshift[132400]: kubelet I0213 04:46:52.663917 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:46:52 localhost.localdomain microshift[132400]: kubelet E0213 04:46:52.664732 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:46:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:53.287116 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:46:54 localhost.localdomain microshift[132400]: kubelet E0213 04:46:54.730759 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:46:54 localhost.localdomain microshift[132400]: kubelet E0213 04:46:54.730810 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted 
volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:46:56 localhost.localdomain microshift[132400]: kubelet I0213 04:46:56.667351 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:46:56 localhost.localdomain microshift[132400]: kubelet E0213 04:46:56.668071 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:46:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:46:58.286737 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:47:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:03.287072 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:47:03 localhost.localdomain microshift[132400]: kubelet I0213 04:47:03.663924 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:47:03 localhost.localdomain microshift[132400]: kubelet E0213 04:47:03.664364 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:47:05 localhost.localdomain microshift[132400]: kubelet I0213 
04:47:05.670064 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f" Feb 13 04:47:05 localhost.localdomain microshift[132400]: kubelet I0213 04:47:05.670094 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273" Feb 13 04:47:05 localhost.localdomain microshift[132400]: kubelet E0213 04:47:05.670299 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:47:05 localhost.localdomain microshift[132400]: kubelet E0213 04:47:05.670331 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:47:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:08.287379 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:47:10 localhost.localdomain microshift[132400]: kubelet I0213 04:47:10.664123 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:47:10 localhost.localdomain microshift[132400]: kubelet E0213 04:47:10.664317 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:47:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:13.286757 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:47:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:47:14.287471 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:47:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:47:14.288027 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:47:16 localhost.localdomain microshift[132400]: kubelet I0213 04:47:16.664014 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1" Feb 13 04:47:16 localhost.localdomain microshift[132400]: kubelet E0213 04:47:16.666324 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:47:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:18.286437 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:47:18 localhost.localdomain microshift[132400]: kubelet I0213 04:47:18.664212 132400 scope.go:115] "RemoveContainer" 
containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:47:18 localhost.localdomain microshift[132400]: kubelet E0213 04:47:18.664553 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:47:18 localhost.localdomain microshift[132400]: kubelet I0213 04:47:18.665040 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:47:18 localhost.localdomain microshift[132400]: kubelet E0213 04:47:18.665468 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:47:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:23.286778 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:24 localhost.localdomain microshift[132400]: kubelet I0213 04:47:24.664540 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:47:24 localhost.localdomain microshift[132400]: kubelet E0213 04:47:24.665240 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:47:27 localhost.localdomain microshift[132400]: kubelet I0213 04:47:27.663743 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:47:27 localhost.localdomain microshift[132400]: kubelet E0213 04:47:27.664014 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:47:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:28.286600 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:29 localhost.localdomain microshift[132400]: kubelet I0213 04:47:29.663520 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:47:29 localhost.localdomain microshift[132400]: kubelet E0213 04:47:29.664110 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:47:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:47:30.654250 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:47:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:47:30.654408 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:47:32 localhost.localdomain microshift[132400]: kubelet I0213 04:47:32.663956 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:47:32 localhost.localdomain microshift[132400]: kubelet E0213 04:47:32.664459 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:47:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:33.286872 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:36 localhost.localdomain microshift[132400]: kubelet I0213 04:47:36.664101 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:47:36 localhost.localdomain microshift[132400]: kubelet E0213 04:47:36.664337 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:47:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:38.286929 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:39 localhost.localdomain microshift[132400]: kubelet I0213 04:47:39.663721 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:47:39 localhost.localdomain microshift[132400]: kubelet E0213 04:47:39.664306 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:47:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:43.286243 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:43 localhost.localdomain microshift[132400]: kubelet I0213 04:47:43.664319 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:47:43 localhost.localdomain microshift[132400]: kubelet E0213 04:47:43.664708 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:47:46 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:47:46.476290 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:47:46 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:47:46.476616 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:47:46 localhost.localdomain microshift[132400]: kubelet I0213 04:47:46.663961 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:47:47 localhost.localdomain microshift[132400]: kubelet I0213 04:47:47.003182 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4}
Feb 13 04:47:47 localhost.localdomain microshift[132400]: kubelet I0213 04:47:47.003737 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:47:47 localhost.localdomain microshift[132400]: kubelet I0213 04:47:47.664089 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:47:47 localhost.localdomain microshift[132400]: kubelet E0213 04:47:47.664304 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:47:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:48.286327 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:53.286350 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:47:53 localhost.localdomain microshift[132400]: kubelet I0213 04:47:53.664422 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:47:54 localhost.localdomain microshift[132400]: kubelet I0213 04:47:54.015803 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17}
Feb 13 04:47:56 localhost.localdomain microshift[132400]: kubelet I0213 04:47:56.666276 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:47:56 localhost.localdomain microshift[132400]: kubelet E0213 04:47:56.666884 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:47:57 localhost.localdomain microshift[132400]: kubelet I0213 04:47:57.021759 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" exitCode=1
Feb 13 04:47:57 localhost.localdomain microshift[132400]: kubelet I0213 04:47:57.021791 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17}
Feb 13 04:47:57 localhost.localdomain microshift[132400]: kubelet I0213 04:47:57.021814 132400 scope.go:115] "RemoveContainer" containerID="99124f2b1e29ff51b79067825354074216862c38c0ce94bd0847af587ed171e1"
Feb 13 04:47:57 localhost.localdomain microshift[132400]: kubelet I0213 04:47:57.022157 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:47:57 localhost.localdomain microshift[132400]: kubelet E0213 04:47:57.022445 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:47:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:47:58.286396 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:00 localhost.localdomain microshift[132400]: kubelet I0213 04:48:00.346854 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:00 localhost.localdomain microshift[132400]: kubelet I0213 04:48:00.346902 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:01 localhost.localdomain microshift[132400]: kubelet I0213 04:48:01.663732 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:48:01 localhost.localdomain microshift[132400]: kubelet E0213 04:48:01.663917 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:48:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:03.286916 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:03 localhost.localdomain microshift[132400]: kubelet I0213 04:48:03.347146 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:03 localhost.localdomain microshift[132400]: kubelet I0213 04:48:03.347400 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:06 localhost.localdomain microshift[132400]: kubelet I0213 04:48:06.348547 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:06 localhost.localdomain microshift[132400]: kubelet I0213 04:48:06.348595 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:07 localhost.localdomain microshift[132400]: kubelet I0213 04:48:07.663838 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:48:07 localhost.localdomain microshift[132400]: kubelet E0213 04:48:07.664534 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:48:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:08.286685 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:09 localhost.localdomain microshift[132400]: kubelet I0213 04:48:09.349610 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:09 localhost.localdomain microshift[132400]: kubelet I0213 04:48:09.350122 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:10 localhost.localdomain microshift[132400]: kubelet I0213 04:48:10.665477 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:48:10 localhost.localdomain microshift[132400]: kubelet E0213 04:48:10.665893 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:48:12 localhost.localdomain microshift[132400]: kubelet I0213 04:48:12.350503 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:12 localhost.localdomain microshift[132400]: kubelet I0213 04:48:12.351550 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:13.287133 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:13 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:48:13.963755 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:48:13 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:48:13.964045 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:48:15 localhost.localdomain microshift[132400]: kubelet I0213 04:48:15.352647 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:15 localhost.localdomain microshift[132400]: kubelet I0213 04:48:15.353000 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:15 localhost.localdomain microshift[132400]: kubelet I0213 04:48:15.667163 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:48:15 localhost.localdomain microshift[132400]: kubelet E0213 04:48:15.667341 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:48:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:18.286892 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:18 localhost.localdomain microshift[132400]: kubelet I0213 04:48:18.353699 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:18 localhost.localdomain microshift[132400]: kubelet I0213 04:48:18.353963 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:20 localhost.localdomain microshift[132400]: kubelet I0213 04:48:20.665022 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:48:20 localhost.localdomain microshift[132400]: kubelet I0213 04:48:20.901914 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:48:20 localhost.localdomain microshift[132400]: kubelet I0213 04:48:20.902286 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:48:20 localhost.localdomain microshift[132400]: kubelet E0213 04:48:20.902592 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:48:21 localhost.localdomain microshift[132400]: kubelet I0213 04:48:21.060243 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf}
Feb 13 04:48:21 localhost.localdomain microshift[132400]: kubelet I0213 04:48:21.060469 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:48:21 localhost.localdomain microshift[132400]: kubelet I0213 04:48:21.355152 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:21 localhost.localdomain microshift[132400]: kubelet I0213 04:48:21.355202 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:22 localhost.localdomain microshift[132400]: kubelet I0213 04:48:22.060811 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:22 localhost.localdomain microshift[132400]: kubelet I0213 04:48:22.061163 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:23 localhost.localdomain microshift[132400]: kubelet I0213 04:48:23.062488 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:23 localhost.localdomain microshift[132400]: kubelet I0213 04:48:23.062543 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:23.286683 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.067291 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" exitCode=1
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.067334 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf}
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.067363 132400 scope.go:115] "RemoveContainer" containerID="83f46c15cf84d493f5f65a052a32a9b8d6ca6e0315a774ea3a0bd66d8539be4f"
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.067684 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet E0213 04:48:24.067969 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.356321 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:24 localhost.localdomain microshift[132400]: kubelet I0213 04:48:24.356497 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:26 localhost.localdomain microshift[132400]: kubelet I0213 04:48:26.192967 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:48:26 localhost.localdomain microshift[132400]: kubelet I0213 04:48:26.193955 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:48:26 localhost.localdomain microshift[132400]: kubelet E0213 04:48:26.195021 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:48:27 localhost.localdomain microshift[132400]: kubelet I0213 04:48:27.356998 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:27 localhost.localdomain microshift[132400]: kubelet I0213 04:48:27.357328 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:28.286315 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:30 localhost.localdomain microshift[132400]: kubelet I0213 04:48:30.358457 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:30 localhost.localdomain microshift[132400]: kubelet I0213 04:48:30.358513 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:30 localhost.localdomain microshift[132400]: kubelet I0213 04:48:30.663895 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:48:30 localhost.localdomain microshift[132400]: kubelet E0213 04:48:30.664407 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:48:32 localhost.localdomain microshift[132400]: kubelet I0213 04:48:32.664069 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:48:32 localhost.localdomain microshift[132400]: kubelet E0213 04:48:32.664684 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:48:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:33.286388 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:33 localhost.localdomain microshift[132400]: kubelet I0213 04:48:33.359394 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:33 localhost.localdomain microshift[132400]: kubelet I0213 04:48:33.359611 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:36 localhost.localdomain microshift[132400]: kubelet I0213 04:48:36.360646 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:36 localhost.localdomain microshift[132400]: kubelet I0213 04:48:36.360704 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:38.286745 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:39 localhost.localdomain microshift[132400]: kubelet I0213 04:48:39.360843 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:39 localhost.localdomain microshift[132400]: kubelet I0213 04:48:39.360895 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:39 localhost.localdomain microshift[132400]: kubelet I0213 04:48:39.726860 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:48:39 localhost.localdomain microshift[132400]: kubelet E0213 04:48:39.726961 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:50:41.726951795 -0500 EST m=+2728.907298063 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:48:40 localhost.localdomain microshift[132400]: kubelet I0213 04:48:40.663928 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:48:40 localhost.localdomain microshift[132400]: kubelet E0213 04:48:40.664224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:48:42 localhost.localdomain microshift[132400]: kubelet I0213 04:48:42.361576 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:42 localhost.localdomain microshift[132400]: kubelet I0213 04:48:42.362003 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:43.286482 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:43 localhost.localdomain microshift[132400]: kubelet I0213 04:48:43.664208 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2"
Feb 13 04:48:44 localhost.localdomain microshift[132400]: kubelet I0213 04:48:44.099879 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca}
Feb 13 04:48:44 localhost.localdomain microshift[132400]: kubelet I0213 04:48:44.664818 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:48:44 localhost.localdomain microshift[132400]: kubelet E0213 04:48:44.665445 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:48:45 localhost.localdomain microshift[132400]: kubelet I0213 04:48:45.362488 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:45 localhost.localdomain microshift[132400]: kubelet I0213 04:48:45.362740 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:45 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:48:45.549441 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:48:45 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:48:45.549475 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:48:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:48.286941 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:48:48 localhost.localdomain microshift[132400]: kubelet I0213 04:48:48.363172 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:48 localhost.localdomain microshift[132400]: kubelet I0213 04:48:48.363216 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:48:51 localhost.localdomain microshift[132400]: kubelet I0213 04:48:51.363354 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:48:51 localhost.localdomain microshift[132400]: kubelet I0213 04:48:51.363395 132400 prober.go:109] "Probe
failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:48:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:53.286694 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:48:53 localhost.localdomain microshift[132400]: kubelet I0213 04:48:53.664205 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:48:53 localhost.localdomain microshift[132400]: kubelet E0213 04:48:53.664743 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:48:54 localhost.localdomain microshift[132400]: kubelet I0213 04:48:54.363950 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:48:54 localhost.localdomain microshift[132400]: kubelet I0213 04:48:54.364373 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:48:54 localhost.localdomain microshift[132400]: kubelet I0213 04:48:54.631766 132400 patch_prober.go:28] 
interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:48:54 localhost.localdomain microshift[132400]: kubelet I0213 04:48:54.632040 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet I0213 04:48:57.365123 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet I0213 04:48:57.365177 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet I0213 04:48:57.663864 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet E0213 04:48:57.664402 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" 
podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet E0213 04:48:57.926806 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:48:57 localhost.localdomain microshift[132400]: kubelet E0213 04:48:57.927036 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:48:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:48:58.287188 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:00 localhost.localdomain microshift[132400]: kubelet I0213 04:49:00.366245 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:00 localhost.localdomain microshift[132400]: kubelet I0213 04:49:00.366619 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:03.286497 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:03 localhost.localdomain microshift[132400]: kubelet I0213 04:49:03.367737 
132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:03 localhost.localdomain microshift[132400]: kubelet I0213 04:49:03.367796 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:04 localhost.localdomain microshift[132400]: kubelet I0213 04:49:04.632247 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:04 localhost.localdomain microshift[132400]: kubelet I0213 04:49:04.632313 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:06 localhost.localdomain microshift[132400]: kubelet I0213 04:49:06.368444 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:06 localhost.localdomain microshift[132400]: kubelet I0213 04:49:06.368487 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:07 localhost.localdomain microshift[132400]: kubelet I0213 04:49:07.663871 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:49:07 localhost.localdomain microshift[132400]: kubelet E0213 04:49:07.664531 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:49:08 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:49:08.095605 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:49:08 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:49:08.095793 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:49:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:08.287152 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:09 localhost.localdomain microshift[132400]: kubelet I0213 04:49:09.369396 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:09 localhost.localdomain microshift[132400]: kubelet I0213 04:49:09.369450 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:10 localhost.localdomain microshift[132400]: kubelet I0213 04:49:10.664113 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:49:10 localhost.localdomain microshift[132400]: kubelet E0213 04:49:10.664665 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:49:12 localhost.localdomain microshift[132400]: kubelet I0213 04:49:12.370444 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:12 localhost.localdomain microshift[132400]: kubelet I0213 04:49:12.370491 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:13 localhost.localdomain microshift[132400]: sysconfwatch-controller 
I0213 04:49:13.287039 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:14 localhost.localdomain microshift[132400]: kubelet I0213 04:49:14.631328 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:14 localhost.localdomain microshift[132400]: kubelet I0213 04:49:14.631752 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:15 localhost.localdomain microshift[132400]: kubelet I0213 04:49:15.371168 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:15 localhost.localdomain microshift[132400]: kubelet I0213 04:49:15.371212 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.149325 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" exitCode=255 Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.149353 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca} Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.149573 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet E0213 04:49:18.149735 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.149800 132400 scope.go:115] "RemoveContainer" containerID="50badd315ceeefe718aec11e22a639f3566d56e68650dad5266f7b3c52883ef2" Feb 13 04:49:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:18.286491 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.371650 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:18 localhost.localdomain microshift[132400]: kubelet I0213 04:49:18.371861 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:49:19 localhost.localdomain microshift[132400]: kubelet I0213 04:49:19.664141 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:49:19 localhost.localdomain microshift[132400]: kubelet E0213 04:49:19.664473 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:49:21 localhost.localdomain microshift[132400]: kubelet I0213 04:49:21.372933 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:21 localhost.localdomain microshift[132400]: kubelet I0213 04:49:21.373128 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:23.286770 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:24 localhost.localdomain microshift[132400]: kubelet I0213 04:49:24.373518 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:24 
localhost.localdomain microshift[132400]: kubelet I0213 04:49:24.373561 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:24 localhost.localdomain microshift[132400]: kubelet I0213 04:49:24.632734 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:24 localhost.localdomain microshift[132400]: kubelet I0213 04:49:24.632793 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:25 localhost.localdomain microshift[132400]: kubelet I0213 04:49:25.671072 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:49:25 localhost.localdomain microshift[132400]: kubelet E0213 04:49:25.671459 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:49:27 localhost.localdomain microshift[132400]: kubelet I0213 04:49:27.374684 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:27 localhost.localdomain microshift[132400]: kubelet I0213 04:49:27.374733 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:28.286723 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:30 localhost.localdomain microshift[132400]: kubelet I0213 04:49:30.375747 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:30 localhost.localdomain microshift[132400]: kubelet I0213 04:49:30.376093 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:31 localhost.localdomain microshift[132400]: kubelet I0213 04:49:31.663246 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:49:31 localhost.localdomain microshift[132400]: kubelet E0213 04:49:31.663894 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:49:32 localhost.localdomain microshift[132400]: kubelet I0213 04:49:32.664175 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:49:32 localhost.localdomain microshift[132400]: kubelet E0213 04:49:32.665067 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:49:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:33.286988 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:49:33 localhost.localdomain microshift[132400]: kubelet I0213 04:49:33.377211 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:33 localhost.localdomain microshift[132400]: kubelet I0213 04:49:33.377418 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:34 localhost.localdomain microshift[132400]: kubelet I0213 04:49:34.631440 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness 
probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:34 localhost.localdomain microshift[132400]: kubelet I0213 04:49:34.631502 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:49:34 localhost.localdomain microshift[132400]: kubelet I0213 04:49:34.631537 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:49:34 localhost.localdomain microshift[132400]: kubelet I0213 04:49:34.632118 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:49:34 localhost.localdomain microshift[132400]: kubelet I0213 04:49:34.632208 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4" gracePeriod=30 Feb 13 04:49:36 localhost.localdomain microshift[132400]: kubelet I0213 04:49:36.378718 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:49:36 localhost.localdomain microshift[132400]: kubelet I0213 04:49:36.378776 132400 prober.go:109] "Probe failed" 
probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:38.286415 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:49:38 localhost.localdomain microshift[132400]: kubelet I0213 04:49:38.664760 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:49:38 localhost.localdomain microshift[132400]: kubelet E0213 04:49:38.665003 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:49:38 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:49:38.973119 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:49:38 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:49:38.973155 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:49:39 localhost.localdomain microshift[132400]: kubelet I0213 04:49:39.379900 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:39 localhost.localdomain microshift[132400]: kubelet I0213 04:49:39.380411 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:42 localhost.localdomain microshift[132400]: kubelet I0213 04:49:42.381074 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:42 localhost.localdomain microshift[132400]: kubelet I0213 04:49:42.381485 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:43.286700 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:49:43 localhost.localdomain microshift[132400]: kubelet I0213 04:49:43.663686 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:49:43 localhost.localdomain microshift[132400]: kubelet E0213 04:49:43.664309 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:49:44 localhost.localdomain microshift[132400]: kubelet I0213 04:49:44.665146 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:49:44 localhost.localdomain microshift[132400]: kubelet E0213 04:49:44.665367 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:49:45 localhost.localdomain microshift[132400]: kubelet I0213 04:49:45.382227 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:45 localhost.localdomain microshift[132400]: kubelet I0213 04:49:45.382289 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:48.286333 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:49:48 localhost.localdomain microshift[132400]: kubelet I0213 04:49:48.382629 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:48 localhost.localdomain microshift[132400]: kubelet I0213 04:49:48.382772 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:50 localhost.localdomain microshift[132400]: kubelet I0213 04:49:50.664212 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:49:50 localhost.localdomain microshift[132400]: kubelet E0213 04:49:50.665272 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:49:51 localhost.localdomain microshift[132400]: kubelet I0213 04:49:51.383468 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:51 localhost.localdomain microshift[132400]: kubelet I0213 04:49:51.383521 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:53.286233 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:49:54 localhost.localdomain microshift[132400]: kubelet I0213 04:49:54.383817 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:54 localhost.localdomain microshift[132400]: kubelet I0213 04:49:54.383868 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:55 localhost.localdomain microshift[132400]: kubelet I0213 04:49:55.211033 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4" exitCode=0
Feb 13 04:49:55 localhost.localdomain microshift[132400]: kubelet I0213 04:49:55.211063 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4}
Feb 13 04:49:55 localhost.localdomain microshift[132400]: kubelet I0213 04:49:55.211078 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565}
Feb 13 04:49:55 localhost.localdomain microshift[132400]: kubelet I0213 04:49:55.211092 132400 scope.go:115] "RemoveContainer" containerID="f11e122914c4662dc6a6ccf32c7d0e79a524285bf725fba28bdf1580d539f273"
Feb 13 04:49:56 localhost.localdomain microshift[132400]: kubelet I0213 04:49:56.217818 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 04:49:56 localhost.localdomain microshift[132400]: kubelet I0213 04:49:56.663529 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:49:56 localhost.localdomain microshift[132400]: kubelet E0213 04:49:56.663908 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:49:57 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:49:57.293467 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:49:57 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:49:57.294066 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:49:57 localhost.localdomain microshift[132400]: kubelet I0213 04:49:57.384741 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:49:57 localhost.localdomain microshift[132400]: kubelet I0213 04:49:57.384966 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:49:57 localhost.localdomain microshift[132400]: kubelet I0213 04:49:57.385041 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:49:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:49:58.286539 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:49:59 localhost.localdomain microshift[132400]: kubelet I0213 04:49:59.663747 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:49:59 localhost.localdomain microshift[132400]: kubelet E0213 04:49:59.663928 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:50:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:03.286690 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:04 localhost.localdomain microshift[132400]: kubelet I0213 04:50:04.664567 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:50:04 localhost.localdomain microshift[132400]: kubelet E0213 04:50:04.664902 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:50:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:08.286569 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:09 localhost.localdomain microshift[132400]: kubelet I0213 04:50:09.346283 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:09 localhost.localdomain microshift[132400]: kubelet I0213 04:50:09.346611 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:11 localhost.localdomain microshift[132400]: kubelet I0213 04:50:11.663465 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:50:11 localhost.localdomain microshift[132400]: kubelet E0213 04:50:11.663822 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:50:12 localhost.localdomain microshift[132400]: kubelet I0213 04:50:12.346971 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:12 localhost.localdomain microshift[132400]: kubelet I0213 04:50:12.347025 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:13.286831 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:13 localhost.localdomain microshift[132400]: kubelet I0213 04:50:13.663813 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:50:13 localhost.localdomain microshift[132400]: kubelet E0213 04:50:13.664335 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:50:15 localhost.localdomain microshift[132400]: kubelet I0213 04:50:15.347704 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:15 localhost.localdomain microshift[132400]: kubelet I0213 04:50:15.348119 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:18.286770 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:18 localhost.localdomain microshift[132400]: kubelet I0213 04:50:18.349321 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:18 localhost.localdomain microshift[132400]: kubelet I0213 04:50:18.349370 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:19 localhost.localdomain microshift[132400]: kubelet I0213 04:50:19.663988 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:50:19 localhost.localdomain microshift[132400]: kubelet E0213 04:50:19.664367 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:50:21 localhost.localdomain microshift[132400]: kubelet I0213 04:50:21.349792 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:21 localhost.localdomain microshift[132400]: kubelet I0213 04:50:21.351427 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:22 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:50:22.282368 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:50:22 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:50:22.282593 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:50:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:23.286728 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:24 localhost.localdomain microshift[132400]: kubelet I0213 04:50:24.351716 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:24 localhost.localdomain microshift[132400]: kubelet I0213 04:50:24.351765 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:24 localhost.localdomain microshift[132400]: kubelet I0213 04:50:24.663826 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:50:24 localhost.localdomain microshift[132400]: kubelet E0213 04:50:24.664026 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:50:25 localhost.localdomain microshift[132400]: kubelet I0213 04:50:25.663652 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:50:25 localhost.localdomain microshift[132400]: kubelet E0213 04:50:25.664005 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:50:27 localhost.localdomain microshift[132400]: kubelet I0213 04:50:27.352787 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:27 localhost.localdomain microshift[132400]: kubelet I0213 04:50:27.353110 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:28.287334 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:30 localhost.localdomain microshift[132400]: kubelet I0213 04:50:30.353770 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:30 localhost.localdomain microshift[132400]: kubelet I0213 04:50:30.353838 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:33.286473 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:33 localhost.localdomain microshift[132400]: kubelet I0213 04:50:33.354824 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:33 localhost.localdomain microshift[132400]: kubelet I0213 04:50:33.354892 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:34 localhost.localdomain microshift[132400]: kubelet I0213 04:50:34.663630 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:50:34 localhost.localdomain microshift[132400]: kubelet E0213 04:50:34.664467 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:50:36 localhost.localdomain microshift[132400]: kubelet I0213 04:50:36.355602 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:36 localhost.localdomain microshift[132400]: kubelet I0213 04:50:36.355719 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:36 localhost.localdomain microshift[132400]: kubelet I0213 04:50:36.664057 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:50:36 localhost.localdomain microshift[132400]: kubelet E0213 04:50:36.664537 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:50:37 localhost.localdomain microshift[132400]: kubelet I0213 04:50:37.663572 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:50:37 localhost.localdomain microshift[132400]: kubelet E0213 04:50:37.663801 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:50:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:38.287108 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:39 localhost.localdomain microshift[132400]: kubelet I0213 04:50:39.356632 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:39 localhost.localdomain microshift[132400]: kubelet I0213 04:50:39.356708 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:41 localhost.localdomain microshift[132400]: kubelet I0213 04:50:41.771392 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:50:41 localhost.localdomain microshift[132400]: kubelet E0213 04:50:41.771806 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:52:43.771789842 -0500 EST m=+2850.952136113 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:50:42 localhost.localdomain microshift[132400]: kubelet I0213 04:50:42.357803 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:42 localhost.localdomain microshift[132400]: kubelet I0213 04:50:42.357882 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:42 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:50:42.703029 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:50:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:50:42.703061 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:50:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:43.287006 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:45 localhost.localdomain microshift[132400]: kubelet I0213 04:50:45.358598 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:45 localhost.localdomain microshift[132400]: kubelet I0213 04:50:45.358681 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:45 localhost.localdomain microshift[132400]: kubelet I0213 04:50:45.664425 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:50:45 localhost.localdomain microshift[132400]: kubelet E0213 04:50:45.664867 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:50:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:48.286760 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:48 localhost.localdomain microshift[132400]: kubelet I0213 04:50:48.358963 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:48 localhost.localdomain microshift[132400]: kubelet I0213 04:50:48.359149 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:49 localhost.localdomain microshift[132400]: kubelet I0213 04:50:49.663673 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:50:49 localhost.localdomain microshift[132400]: kubelet E0213 04:50:49.664133 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:50:50 localhost.localdomain microshift[132400]: kubelet I0213 04:50:50.664270 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:50:50 localhost.localdomain microshift[132400]: kubelet E0213 04:50:50.664930 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:50:51 localhost.localdomain microshift[132400]: kubelet I0213 04:50:51.360224 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:51 localhost.localdomain microshift[132400]: kubelet I0213 04:50:51.360411 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:53.286595 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:54 localhost.localdomain microshift[132400]: kubelet I0213 04:50:54.360738 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:54 localhost.localdomain microshift[132400]: kubelet I0213 04:50:54.361317 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:57 localhost.localdomain microshift[132400]: kubelet I0213 04:50:57.362249 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:50:57 localhost.localdomain microshift[132400]: kubelet I0213 04:50:57.362774 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:50:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:50:58.286993 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:50:59 localhost.localdomain microshift[132400]: kubelet I0213 04:50:59.663605 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:50:59 localhost.localdomain microshift[132400]: kubelet E0213 04:50:59.664232 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:51:00 localhost.localdomain microshift[132400]: kubelet I0213 04:51:00.363687 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:51:00 localhost.localdomain microshift[132400]: kubelet I0213 04:51:00.363896 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:51:01 localhost.localdomain microshift[132400]: kubelet E0213 04:51:01.119292 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:51:01 localhost.localdomain microshift[132400]: kubelet E0213 04:51:01.120512 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:51:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:03.286711 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:51:03 localhost.localdomain microshift[132400]: kubelet I0213 04:51:03.364112 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:51:03 localhost.localdomain microshift[132400]: kubelet I0213 04:51:03.364325 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:51:04 localhost.localdomain microshift[132400]: kubelet I0213 04:51:04.631808
132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:04 localhost.localdomain microshift[132400]: kubelet I0213 04:51:04.631858 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:04 localhost.localdomain microshift[132400]: kubelet I0213 04:51:04.664901 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:51:04 localhost.localdomain microshift[132400]: kubelet E0213 04:51:04.665583 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:51:05 localhost.localdomain microshift[132400]: kubelet I0213 04:51:05.663433 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:51:05 localhost.localdomain microshift[132400]: kubelet E0213 04:51:05.663780 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" 
pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:51:06 localhost.localdomain microshift[132400]: kubelet I0213 04:51:06.365370 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:06 localhost.localdomain microshift[132400]: kubelet I0213 04:51:06.365429 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:08.286739 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:09 localhost.localdomain microshift[132400]: kubelet I0213 04:51:09.366343 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:09 localhost.localdomain microshift[132400]: kubelet I0213 04:51:09.366393 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:12 localhost.localdomain microshift[132400]: kubelet I0213 04:51:12.367296 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:12 localhost.localdomain microshift[132400]: kubelet I0213 04:51:12.367345 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:12 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:51:12.489946 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:51:12 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:51:12.490139 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:51:12 localhost.localdomain microshift[132400]: kubelet I0213 04:51:12.664122 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:51:12 localhost.localdomain microshift[132400]: kubelet E0213 04:51:12.664421 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:51:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:13.286798 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:14 
localhost.localdomain microshift[132400]: kubelet I0213 04:51:14.631802 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:14 localhost.localdomain microshift[132400]: kubelet I0213 04:51:14.631852 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:15 localhost.localdomain microshift[132400]: kubelet I0213 04:51:15.367861 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:15 localhost.localdomain microshift[132400]: kubelet I0213 04:51:15.367903 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:18.286975 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:18 localhost.localdomain microshift[132400]: kubelet I0213 04:51:18.368845 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 
04:51:18 localhost.localdomain microshift[132400]: kubelet I0213 04:51:18.368897 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:19 localhost.localdomain microshift[132400]: kubelet I0213 04:51:19.664148 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:51:19 localhost.localdomain microshift[132400]: kubelet I0213 04:51:19.664550 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:51:19 localhost.localdomain microshift[132400]: kubelet E0213 04:51:19.664795 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:51:19 localhost.localdomain microshift[132400]: kubelet E0213 04:51:19.665027 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:51:21 localhost.localdomain microshift[132400]: kubelet I0213 04:51:21.369764 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure 
output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:21 localhost.localdomain microshift[132400]: kubelet I0213 04:51:21.369809 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:23.286686 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:24 localhost.localdomain microshift[132400]: kubelet I0213 04:51:24.370635 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:24 localhost.localdomain microshift[132400]: kubelet I0213 04:51:24.370704 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:24 localhost.localdomain microshift[132400]: kubelet I0213 04:51:24.631284 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:24 localhost.localdomain microshift[132400]: kubelet I0213 04:51:24.631589 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:26 localhost.localdomain microshift[132400]: kubelet I0213 04:51:26.666474 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:51:26 localhost.localdomain microshift[132400]: kubelet E0213 04:51:26.667095 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:51:27 localhost.localdomain microshift[132400]: kubelet I0213 04:51:27.370986 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:27 localhost.localdomain microshift[132400]: kubelet I0213 04:51:27.371292 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:28.286320 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:30 localhost.localdomain microshift[132400]: kubelet I0213 04:51:30.371447 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:30 localhost.localdomain microshift[132400]: kubelet I0213 04:51:30.371520 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:51:31.196267 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:51:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:51:31.196304 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:51:32 localhost.localdomain microshift[132400]: kubelet I0213 04:51:32.663916 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:51:32 localhost.localdomain microshift[132400]: kubelet E0213 04:51:32.664254 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:51:33 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 04:51:33.286385 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:33 localhost.localdomain microshift[132400]: kubelet I0213 04:51:33.372024 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:33 localhost.localdomain microshift[132400]: kubelet I0213 04:51:33.372125 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:34 localhost.localdomain microshift[132400]: kubelet I0213 04:51:34.631346 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:34 localhost.localdomain microshift[132400]: kubelet I0213 04:51:34.631729 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:34 localhost.localdomain microshift[132400]: kubelet I0213 04:51:34.664194 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:51:34 localhost.localdomain microshift[132400]: kubelet E0213 04:51:34.664616 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:51:36 localhost.localdomain microshift[132400]: kubelet I0213 04:51:36.372906 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:36 localhost.localdomain microshift[132400]: kubelet I0213 04:51:36.372955 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:38.286499 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:38 localhost.localdomain microshift[132400]: kubelet I0213 04:51:38.666016 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17" Feb 13 04:51:38 localhost.localdomain microshift[132400]: kubelet E0213 04:51:38.666836 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:51:39 localhost.localdomain microshift[132400]: kubelet I0213 04:51:39.374056 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:39 localhost.localdomain microshift[132400]: kubelet I0213 04:51:39.374546 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:42 localhost.localdomain microshift[132400]: kubelet I0213 04:51:42.375277 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:42 localhost.localdomain microshift[132400]: kubelet I0213 04:51:42.375605 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:43.286753 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:44 localhost.localdomain microshift[132400]: kubelet I0213 04:51:44.632276 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:44 localhost.localdomain microshift[132400]: kubelet I0213 04:51:44.632695 
132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:44 localhost.localdomain microshift[132400]: kubelet I0213 04:51:44.632802 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:51:44 localhost.localdomain microshift[132400]: kubelet I0213 04:51:44.633276 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:51:44 localhost.localdomain microshift[132400]: kubelet I0213 04:51:44.633449 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" gracePeriod=30 Feb 13 04:51:45 localhost.localdomain microshift[132400]: kubelet I0213 04:51:45.376606 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:45 localhost.localdomain microshift[132400]: kubelet I0213 04:51:45.376680 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 04:51:45 localhost.localdomain microshift[132400]: kubelet I0213 04:51:45.672005 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf" Feb 13 04:51:45 localhost.localdomain microshift[132400]: kubelet E0213 04:51:45.674142 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:51:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:48.287219 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:51:48 localhost.localdomain microshift[132400]: kubelet I0213 04:51:48.377049 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:51:48 localhost.localdomain microshift[132400]: kubelet I0213 04:51:48.377291 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:51:48 localhost.localdomain microshift[132400]: kubelet I0213 04:51:48.664540 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:51:48 localhost.localdomain microshift[132400]: kubelet E0213 04:51:48.665071 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:51:49 localhost.localdomain microshift[132400]: kubelet I0213 04:51:49.663859 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:51:49 localhost.localdomain microshift[132400]: kubelet E0213 04:51:49.664158 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:51:51 localhost.localdomain microshift[132400]: kubelet I0213 04:51:51.377952 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:51:51 localhost.localdomain microshift[132400]: kubelet I0213 04:51:51.378002 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:51:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:53.286867 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:51:54 localhost.localdomain microshift[132400]: kubelet I0213 04:51:54.379515 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:51:54 localhost.localdomain microshift[132400]: kubelet I0213 04:51:54.379611 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:51:57 localhost.localdomain microshift[132400]: kubelet I0213 04:51:57.380497 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:51:57 localhost.localdomain microshift[132400]: kubelet I0213 04:51:57.380844 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:51:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:51:58.286722 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:51:59 localhost.localdomain microshift[132400]: kubelet I0213 04:51:59.663788 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:51:59 localhost.localdomain microshift[132400]: kubelet E0213 04:51:59.664164 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:52:00 localhost.localdomain microshift[132400]: kubelet I0213 04:52:00.381002 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:52:00 localhost.localdomain microshift[132400]: kubelet I0213 04:52:00.381076 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:52:00 localhost.localdomain microshift[132400]: kubelet I0213 04:52:00.664075 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:52:00 localhost.localdomain microshift[132400]: kubelet E0213 04:52:00.664777 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:52:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:03.287085 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:03 localhost.localdomain microshift[132400]: kubelet I0213 04:52:03.381752 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:52:03 localhost.localdomain microshift[132400]: kubelet I0213 04:52:03.382033 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:52:03 localhost.localdomain microshift[132400]: kubelet I0213 04:52:03.663971 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:52:03 localhost.localdomain microshift[132400]: kubelet E0213 04:52:03.664640 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:52:04 localhost.localdomain microshift[132400]: kubelet E0213 04:52:04.754880 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:52:05 localhost.localdomain microshift[132400]: kubelet I0213 04:52:05.419105 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" exitCode=0
Feb 13 04:52:05 localhost.localdomain microshift[132400]: kubelet I0213 04:52:05.419168 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565}
Feb 13 04:52:05 localhost.localdomain microshift[132400]: kubelet I0213 04:52:05.419296 132400 scope.go:115] "RemoveContainer" containerID="456bf9cd6f2af0c7204018baf9d7c9d836016bc9f0b37a99b65537fe971465f4"
Feb 13 04:52:05 localhost.localdomain microshift[132400]: kubelet I0213 04:52:05.419497 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:52:05 localhost.localdomain microshift[132400]: kubelet E0213 04:52:05.420739 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:52:06 localhost.localdomain microshift[132400]: kubelet I0213 04:52:06.382968 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:52:06 localhost.localdomain microshift[132400]: kubelet I0213 04:52:06.383057 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:52:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:08.286803 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:08 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:52:08.684256 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:52:08 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:52:08.684454 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:52:09 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:52:09.314373 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:52:09 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:52:09.314718 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:52:12 localhost.localdomain microshift[132400]: kubelet I0213 04:52:12.664242 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:52:12 localhost.localdomain microshift[132400]: kubelet E0213 04:52:12.664700 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:52:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:13.286587 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:13 localhost.localdomain microshift[132400]: kubelet I0213 04:52:13.663883 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:52:13 localhost.localdomain microshift[132400]: kubelet E0213 04:52:13.664255 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:52:15 localhost.localdomain microshift[132400]: kubelet I0213 04:52:15.664738 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:52:15 localhost.localdomain microshift[132400]: kubelet E0213 04:52:15.665418 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:52:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:18.286845 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:19 localhost.localdomain microshift[132400]: kubelet I0213 04:52:19.664006 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:52:19
localhost.localdomain microshift[132400]: kubelet E0213 04:52:19.664545 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:52:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:23.287205 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:24 localhost.localdomain microshift[132400]: kubelet I0213 04:52:24.664230 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:52:24 localhost.localdomain microshift[132400]: kubelet E0213 04:52:24.665293 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:52:26 localhost.localdomain microshift[132400]: kubelet I0213 04:52:26.664076 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:52:26 localhost.localdomain microshift[132400]: kubelet E0213 04:52:26.667872 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:52:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:28.287296 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:30 localhost.localdomain microshift[132400]: kubelet I0213 04:52:30.664277 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:52:30 localhost.localdomain microshift[132400]: kubelet E0213 04:52:30.665325 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:52:32 localhost.localdomain microshift[132400]: kubelet I0213 04:52:32.663758 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:52:32 localhost.localdomain microshift[132400]: kubelet E0213 04:52:32.664292 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:52:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:33.287217 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:36 localhost.localdomain microshift[132400]: kubelet I0213 04:52:36.667770 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:52:36 localhost.localdomain microshift[132400]: kubelet E0213 04:52:36.668553 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:52:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:38.286892 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:39 localhost.localdomain microshift[132400]: kubelet I0213 04:52:39.663791 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:52:39 localhost.localdomain microshift[132400]: kubelet E0213 04:52:39.664307 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:52:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:52:40.165218 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:52:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:52:40.165374 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:52:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:43.287145 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:43 localhost.localdomain microshift[132400]: kubelet I0213 04:52:43.663328 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:52:43 localhost.localdomain microshift[132400]: kubelet E0213 04:52:43.663820 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:52:43 localhost.localdomain microshift[132400]: kubelet I0213 04:52:43.816719 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:52:43 localhost.localdomain microshift[132400]: kubelet E0213 04:52:43.816915 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:54:45.81689903 -0500 EST m=+2972.997245309 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 04:52:44 localhost.localdomain microshift[132400]: kubelet I0213 04:52:44.667768 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:52:44 localhost.localdomain microshift[132400]: kubelet E0213 04:52:44.669881 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:52:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:48.286826 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:50 localhost.localdomain microshift[132400]: kubelet I0213 04:52:50.665004 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:52:50 localhost.localdomain microshift[132400]: kubelet E0213 04:52:50.665312 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:52:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:53.287219 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:54 localhost.localdomain microshift[132400]: kubelet I0213 04:52:54.664002 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:52:54 localhost.localdomain microshift[132400]: kubelet E0213 04:52:54.664641 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:52:57 localhost.localdomain microshift[132400]: kubelet I0213 04:52:57.664345 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:52:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:52:58.286816 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:52:58 localhost.localdomain microshift[132400]: kubelet I0213 04:52:58.508544 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7}
Feb 13 04:52:58 localhost.localdomain microshift[132400]: kubelet I0213 04:52:58.664438 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:52:58 localhost.localdomain microshift[132400]: kubelet E0213 04:52:58.664853 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:53:01 localhost.localdomain microshift[132400]: kubelet I0213 04:53:01.516645 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" exitCode=1
Feb 13 04:53:01 localhost.localdomain microshift[132400]: kubelet I0213 04:53:01.517001 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7}
Feb 13 04:53:01 localhost.localdomain microshift[132400]: kubelet I0213 04:53:01.517069 132400 scope.go:115] "RemoveContainer" containerID="feabd82fac971845a4351275b02f8917137b5383bfdcc58d99aaf8c434588b17"
Feb 13 04:53:01 localhost.localdomain microshift[132400]: kubelet I0213 04:53:01.517524 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:53:01 localhost.localdomain microshift[132400]: kubelet E0213 04:53:01.518089 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:53:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:03.286652 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:04 localhost.localdomain microshift[132400]: kubelet E0213 04:53:04.319624 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the
condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:53:04 localhost.localdomain microshift[132400]: kubelet E0213 04:53:04.319673 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:53:05 localhost.localdomain microshift[132400]: kubelet I0213 04:53:05.667991 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:53:05 localhost.localdomain microshift[132400]: kubelet E0213 04:53:05.668292 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:53:05 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:53:05.791299 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:53:05 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:53:05.791319 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:53:06 localhost.localdomain microshift[132400]: kubelet I0213 04:53:06.666365 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:53:06 localhost.localdomain microshift[132400]: kubelet E0213 04:53:06.666796 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:53:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:08.287475 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:11 localhost.localdomain microshift[132400]: kubelet I0213 04:53:11.664147 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:53:11 localhost.localdomain microshift[132400]: kubelet E0213 04:53:11.664577 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:53:12 localhost.localdomain microshift[132400]: kubelet I0213 04:53:12.663948 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:53:12 localhost.localdomain microshift[132400]: kubelet E0213 04:53:12.664367 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:53:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:13.287115 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:16 localhost.localdomain microshift[132400]: kubelet I0213 04:53:16.664023 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:53:16 localhost.localdomain microshift[132400]: kubelet E0213 04:53:16.664355 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:53:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:18.286552 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:20 localhost.localdomain microshift[132400]: kubelet I0213 04:53:20.901638 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:53:20 localhost.localdomain microshift[132400]: kubelet I0213 04:53:20.902004 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:53:20 localhost.localdomain microshift[132400]: kubelet E0213 04:53:20.902278 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:53:21 localhost.localdomain microshift[132400]: kubelet I0213 04:53:21.664349 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:53:21 localhost.localdomain microshift[132400]: kubelet E0213 04:53:21.665140 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:53:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:23.286225 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:26 localhost.localdomain microshift[132400]: kubelet I0213 04:53:26.664150 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:53:26 localhost.localdomain microshift[132400]: kubelet E0213 04:53:26.667291 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:53:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:28.286945 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:31 localhost.localdomain microshift[132400]: kubelet I0213 04:53:31.663530 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:53:32 localhost.localdomain microshift[132400]: kubelet I0213 04:53:32.570705 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5}
Feb 13 04:53:32 localhost.localdomain microshift[132400]: kubelet I0213 04:53:32.571842 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 04:53:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:33.286591 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:53:33 localhost.localdomain microshift[132400]: kubelet I0213 04:53:33.571650 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:53:33 localhost.localdomain microshift[132400]: kubelet I0213 04:53:33.571867 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:53:33 localhost.localdomain microshift[132400]: kubelet I0213 04:53:33.664376 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:53:33 localhost.localdomain microshift[132400]: kubelet E0213 04:53:33.664949 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:53:34 localhost.localdomain microshift[132400]: kubelet I0213 04:53:34.572032 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:53:34 localhost.localdomain microshift[132400]: kubelet I0213 04:53:34.572085 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:53:34 localhost.localdomain microshift[132400]: kubelet I0213 04:53:34.664247 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca"
Feb 13 04:53:34 localhost.localdomain microshift[132400]: kubelet E0213 04:53:34.664408 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:53:35 localhost.localdomain microshift[132400]: kubelet I0213 04:53:35.576316 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" exitCode=1
Feb 13 04:53:35 localhost.localdomain microshift[132400]: kubelet I0213 04:53:35.576733 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5}
Feb 13 04:53:35 localhost.localdomain microshift[132400]: kubelet I0213 04:53:35.576818 132400 scope.go:115] "RemoveContainer" containerID="3d8e2bb12e6e31e30e6c1feb717d66bebb21595defc9859371b444091b5961cf"
Feb 13 04:53:35 localhost.localdomain microshift[132400]: kubelet I0213 04:53:35.577180 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:53:35 localhost.localdomain microshift[132400]: kubelet E0213 04:53:35.577582 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:53:37 localhost.localdomain microshift[132400]: kubelet I0213 04:53:37.663605 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:53:37 localhost.localdomain microshift[132400]: kubelet E0213 04:53:37.665022 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 04:53:38 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:53:38.098697 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find
the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:53:38 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:53:38.098862 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:53:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:38.287161 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:53:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:43.287174 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:53:47 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:53:47.137551 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:53:47 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:53:47.137905 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:53:47 localhost.localdomain microshift[132400]: kubelet I0213 04:53:47.663708 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:53:47 localhost.localdomain microshift[132400]: kubelet E0213 04:53:47.663961 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:53:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:48.286956 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:53:48 localhost.localdomain microshift[132400]: kubelet I0213 04:53:48.663809 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:53:48 localhost.localdomain microshift[132400]: kubelet E0213 04:53:48.664296 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:53:48 localhost.localdomain microshift[132400]: kubelet I0213 04:53:48.664636 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:53:48 localhost.localdomain microshift[132400]: kubelet E0213 04:53:48.664938 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:53:49 localhost.localdomain microshift[132400]: kubelet I0213 04:53:49.664322 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:53:49 localhost.localdomain microshift[132400]: kubelet E0213 04:53:49.664926 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:53:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:53.286721 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:53:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:53:58.286830 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:53:58 localhost.localdomain microshift[132400]: kubelet I0213 04:53:58.664510 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:53:58 localhost.localdomain microshift[132400]: kubelet E0213 04:53:58.665468 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:53:59 localhost.localdomain microshift[132400]: kubelet I0213 04:53:59.664277 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:53:59 localhost.localdomain microshift[132400]: kubelet E0213 04:53:59.665070 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:54:00 
localhost.localdomain microshift[132400]: kubelet I0213 04:54:00.664152 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:54:00 localhost.localdomain microshift[132400]: kubelet E0213 04:54:00.664805 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:54:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:03.286952 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:03 localhost.localdomain microshift[132400]: kubelet I0213 04:54:03.663888 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:54:03 localhost.localdomain microshift[132400]: kubelet E0213 04:54:03.664967 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:54:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:08.286627 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:10 localhost.localdomain microshift[132400]: kubelet I0213 04:54:10.664007 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:54:10 localhost.localdomain microshift[132400]: kubelet E0213 04:54:10.664837 132400 pod_workers.go:965] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:54:12 localhost.localdomain microshift[132400]: kubelet I0213 04:54:12.663947 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:54:12 localhost.localdomain microshift[132400]: kubelet E0213 04:54:12.664179 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:54:12 localhost.localdomain microshift[132400]: kubelet I0213 04:54:12.664765 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:54:12 localhost.localdomain microshift[132400]: kubelet E0213 04:54:12.665140 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:54:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:13.286289 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:18 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:54:18.177932 132400 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:54:18 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:54:18.177957 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:54:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:18.286930 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:18 localhost.localdomain microshift[132400]: kubelet I0213 04:54:18.663577 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:54:18 localhost.localdomain microshift[132400]: kubelet E0213 04:54:18.664008 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:54:22 localhost.localdomain microshift[132400]: kubelet I0213 04:54:22.663844 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:54:22 localhost.localdomain microshift[132400]: kubelet E0213 04:54:22.664679 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:54:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:23.287214 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:23 localhost.localdomain microshift[132400]: kubelet I0213 04:54:23.663446 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:54:24 localhost.localdomain microshift[132400]: kubelet I0213 04:54:24.656078 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72} Feb 13 04:54:26 localhost.localdomain microshift[132400]: kubelet I0213 04:54:26.192832 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:54:26 localhost.localdomain microshift[132400]: kubelet I0213 04:54:26.193461 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:54:26 localhost.localdomain microshift[132400]: kubelet E0213 04:54:26.193862 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:54:26 localhost.localdomain microshift[132400]: kubelet I0213 04:54:26.665606 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:54:26 localhost.localdomain microshift[132400]: kubelet E0213 04:54:26.666088 132400 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:54:27 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:54:27.839116 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:54:27 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:54:27.839142 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:54:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:28.286743 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:33.287047 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:35 localhost.localdomain microshift[132400]: kubelet I0213 04:54:35.671051 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:54:35 localhost.localdomain microshift[132400]: kubelet E0213 04:54:35.671878 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" 
podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:54:37 localhost.localdomain microshift[132400]: kubelet I0213 04:54:37.664228 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:54:37 localhost.localdomain microshift[132400]: kubelet E0213 04:54:37.664535 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:54:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:38.286873 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:40 localhost.localdomain microshift[132400]: kubelet I0213 04:54:40.664159 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:54:40 localhost.localdomain microshift[132400]: kubelet E0213 04:54:40.665318 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:54:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:43.286584 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:45 localhost.localdomain microshift[132400]: kubelet I0213 04:54:45.831245 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:54:45 localhost.localdomain microshift[132400]: kubelet E0213 04:54:45.831385 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:56:47.831373722 -0500 EST m=+3095.011719992 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:54:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:48.287028 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:49 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:54:49.433584 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:54:49 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:54:49.433611 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:54:49 localhost.localdomain microshift[132400]: kubelet I0213 04:54:49.663739 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:54:49 localhost.localdomain 
microshift[132400]: kubelet E0213 04:54:49.664126 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:54:49 localhost.localdomain microshift[132400]: kubelet I0213 04:54:49.664391 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:54:49 localhost.localdomain microshift[132400]: kubelet E0213 04:54:49.665000 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:54:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:54:53.286776 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:54 localhost.localdomain microshift[132400]: kubelet I0213 04:54:54.665737 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:54:54 localhost.localdomain microshift[132400]: kubelet E0213 04:54:54.666030 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:54:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 
04:54:58.286746 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:54:58 localhost.localdomain microshift[132400]: kubelet I0213 04:54:58.707709 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" exitCode=255 Feb 13 04:54:58 localhost.localdomain microshift[132400]: kubelet I0213 04:54:58.707746 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72} Feb 13 04:54:58 localhost.localdomain microshift[132400]: kubelet I0213 04:54:58.707779 132400 scope.go:115] "RemoveContainer" containerID="44ac8881dd04c3e87ac204197374ec77f7f2027bbc564fb568cf378ec8f7b4ca" Feb 13 04:54:58 localhost.localdomain microshift[132400]: kubelet I0213 04:54:58.708087 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:54:58 localhost.localdomain microshift[132400]: kubelet E0213 04:54:58.708256 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:55:01 localhost.localdomain microshift[132400]: kubelet I0213 04:55:01.663870 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:01 localhost.localdomain microshift[132400]: kubelet E0213 04:55:01.664266 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:02 localhost.localdomain microshift[132400]: kubelet I0213 04:55:02.665008 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:55:02 localhost.localdomain microshift[132400]: kubelet E0213 04:55:02.665928 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:55:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:03.286463 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:07 localhost.localdomain microshift[132400]: kubelet E0213 04:55:07.524371 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:55:07 localhost.localdomain microshift[132400]: kubelet E0213 04:55:07.524407 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:55:07 localhost.localdomain microshift[132400]: kubelet I0213 04:55:07.663625 132400 
scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:55:07 localhost.localdomain microshift[132400]: kubelet E0213 04:55:07.664096 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:55:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:08.286853 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:10 localhost.localdomain microshift[132400]: kubelet I0213 04:55:10.664366 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:55:10 localhost.localdomain microshift[132400]: kubelet E0213 04:55:10.664849 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:55:12 localhost.localdomain microshift[132400]: kubelet I0213 04:55:12.664104 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:12 localhost.localdomain microshift[132400]: kubelet E0213 04:55:12.664599 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller 
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:13.286847 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:17 localhost.localdomain microshift[132400]: kubelet I0213 04:55:17.664228 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:55:17 localhost.localdomain microshift[132400]: kubelet E0213 04:55:17.664645 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:55:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:18.286947 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:22 localhost.localdomain microshift[132400]: kubelet I0213 04:55:22.665433 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:55:22 localhost.localdomain microshift[132400]: kubelet E0213 04:55:22.665742 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:55:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:23.287112 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:23 
localhost.localdomain microshift[132400]: kubelet I0213 04:55:23.664301 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:23 localhost.localdomain microshift[132400]: kubelet E0213 04:55:23.664643 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:23 localhost.localdomain microshift[132400]: kubelet I0213 04:55:23.665003 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:55:23 localhost.localdomain microshift[132400]: kubelet E0213 04:55:23.665186 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:55:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:55:26.145110 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:55:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:55:26.145286 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list 
*v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:55:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:28.287171 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:55:30.143182 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:55:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:55:30.143204 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:55:32 localhost.localdomain microshift[132400]: kubelet I0213 04:55:32.663434 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:55:32 localhost.localdomain microshift[132400]: kubelet E0213 04:55:32.664160 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:55:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:33.286875 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:33 localhost.localdomain microshift[132400]: kubelet I0213 04:55:33.664451 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:55:33 localhost.localdomain microshift[132400]: kubelet E0213 
04:55:33.665409 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:55:34 localhost.localdomain microshift[132400]: kubelet I0213 04:55:34.664239 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:55:34 localhost.localdomain microshift[132400]: kubelet E0213 04:55:34.664462 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:55:34 localhost.localdomain microshift[132400]: kubelet I0213 04:55:34.664846 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:34 localhost.localdomain microshift[132400]: kubelet E0213 04:55:34.665245 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:38.286559 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:43 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 04:55:43.286926 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:43 localhost.localdomain microshift[132400]: kubelet I0213 04:55:43.663797 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:55:43 localhost.localdomain microshift[132400]: kubelet E0213 04:55:43.664252 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet I0213 04:55:45.666403 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet E0213 04:55:45.666722 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet I0213 04:55:45.666957 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet E0213 04:55:45.667165 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet I0213 04:55:45.667274 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:55:45 localhost.localdomain microshift[132400]: kubelet E0213 04:55:45.667365 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:55:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:48.286696 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:53.286707 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:56 localhost.localdomain microshift[132400]: kubelet I0213 04:55:56.666340 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:55:56 localhost.localdomain microshift[132400]: kubelet E0213 04:55:56.666870 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:55:57 localhost.localdomain microshift[132400]: kubelet I0213 04:55:57.663759 132400 scope.go:115] "RemoveContainer" 
containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:55:57 localhost.localdomain microshift[132400]: kubelet I0213 04:55:57.664095 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:55:57 localhost.localdomain microshift[132400]: kubelet E0213 04:55:57.664312 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:55:57 localhost.localdomain microshift[132400]: kubelet E0213 04:55:57.664356 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:55:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:55:58.286730 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:55:58 localhost.localdomain microshift[132400]: kubelet I0213 04:55:58.664259 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:55:58 localhost.localdomain microshift[132400]: kubelet E0213 04:55:58.664767 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" 
podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:56:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:03.286980 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:08.286886 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:09 localhost.localdomain microshift[132400]: kubelet I0213 04:56:09.664097 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:56:09 localhost.localdomain microshift[132400]: kubelet E0213 04:56:09.664478 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:56:11 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:56:11.244211 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:56:11 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:56:11.244241 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:56:11 localhost.localdomain microshift[132400]: kubelet I0213 04:56:11.663377 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:56:11 localhost.localdomain microshift[132400]: kubelet I0213 04:56:11.663406 132400 scope.go:115] "RemoveContainer" 
containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:56:11 localhost.localdomain microshift[132400]: kubelet E0213 04:56:11.663627 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:56:11 localhost.localdomain microshift[132400]: kubelet E0213 04:56:11.663860 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:56:12 localhost.localdomain microshift[132400]: kubelet I0213 04:56:12.664034 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:56:12 localhost.localdomain microshift[132400]: kubelet E0213 04:56:12.664329 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:56:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:13.286807 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:15 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:56:15.412378 132400 
reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:56:15 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:56:15.412399 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:56:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:18.286755 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:22 localhost.localdomain microshift[132400]: kubelet I0213 04:56:22.664231 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:56:22 localhost.localdomain microshift[132400]: kubelet E0213 04:56:22.664867 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:56:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:23.287012 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet I0213 04:56:25.672261 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet E0213 04:56:25.672558 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet I0213 04:56:25.672908 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet E0213 04:56:25.673031 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet I0213 04:56:25.673291 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:56:25 localhost.localdomain microshift[132400]: kubelet E0213 04:56:25.673540 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:56:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:28.286206 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:33.286393 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:36 localhost.localdomain 
microshift[132400]: kubelet I0213 04:56:36.663940 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:56:36 localhost.localdomain microshift[132400]: kubelet E0213 04:56:36.664194 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:56:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:38.286540 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:39 localhost.localdomain microshift[132400]: kubelet I0213 04:56:39.664129 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:56:39 localhost.localdomain microshift[132400]: kubelet E0213 04:56:39.664908 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:56:40 localhost.localdomain microshift[132400]: kubelet I0213 04:56:40.664020 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:56:40 localhost.localdomain microshift[132400]: kubelet E0213 04:56:40.664599 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:56:40 localhost.localdomain microshift[132400]: kubelet I0213 04:56:40.665078 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:56:40 localhost.localdomain microshift[132400]: kubelet E0213 04:56:40.665500 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:56:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:43.286397 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:47 localhost.localdomain microshift[132400]: kubelet I0213 04:56:47.852107 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:56:47 localhost.localdomain microshift[132400]: kubelet E0213 04:56:47.852323 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 04:58:49.852307496 -0500 EST m=+3217.032653766 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:56:48 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:56:48.254760 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:56:48 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:56:48.255019 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:56:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:48.286431 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:50 localhost.localdomain microshift[132400]: kubelet I0213 04:56:50.665677 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:56:50 localhost.localdomain microshift[132400]: kubelet E0213 04:56:50.666573 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:56:50 localhost.localdomain microshift[132400]: kubelet I0213 04:56:50.667111 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:56:50 
localhost.localdomain microshift[132400]: kubelet E0213 04:56:50.667368 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 04:56:51 localhost.localdomain microshift[132400]: kubelet I0213 04:56:51.664200 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:56:51 localhost.localdomain microshift[132400]: kubelet E0213 04:56:51.664442 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:56:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:53.286456 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:56:55 localhost.localdomain microshift[132400]: kubelet I0213 04:56:55.663761 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:56:55 localhost.localdomain microshift[132400]: kubelet E0213 04:56:55.664089 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:56:58 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:56:58.287093 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:57:02 localhost.localdomain microshift[132400]: kubelet I0213 04:57:02.665080 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:57:02 localhost.localdomain microshift[132400]: kubelet I0213 04:57:02.665769 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7" Feb 13 04:57:02 localhost.localdomain microshift[132400]: kubelet E0213 04:57:02.666088 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:57:02 localhost.localdomain microshift[132400]: kubelet E0213 04:57:02.666172 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:57:02 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:57:02.678725 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:57:02 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:57:02.678746 132400 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:57:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:03.286911 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:05 localhost.localdomain microshift[132400]: kubelet I0213 04:57:05.666764 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565"
Feb 13 04:57:05 localhost.localdomain microshift[132400]: kubelet I0213 04:57:05.908243 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977}
Feb 13 04:57:05 localhost.localdomain microshift[132400]: kubelet I0213 04:57:05.908746 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 04:57:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:08.286892 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:10 localhost.localdomain microshift[132400]: kubelet I0213 04:57:10.664173 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:57:10 localhost.localdomain microshift[132400]: kubelet E0213 04:57:10.665064 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:57:10 localhost.localdomain microshift[132400]: kubelet E0213 04:57:10.722271 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 04:57:10 localhost.localdomain microshift[132400]: kubelet E0213 04:57:10.722333 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 04:57:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:13.287065 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:15 localhost.localdomain microshift[132400]: kubelet I0213 04:57:15.664294 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:57:15 localhost.localdomain microshift[132400]: kubelet E0213 04:57:15.665059 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:57:16 localhost.localdomain microshift[132400]: kubelet I0213 04:57:16.664221 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:57:16 localhost.localdomain microshift[132400]: kubelet E0213 04:57:16.664539 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:57:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:18.286951 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:18 localhost.localdomain microshift[132400]: kubelet I0213 04:57:18.346916 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:18 localhost.localdomain microshift[132400]: kubelet I0213 04:57:18.347177 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:21 localhost.localdomain microshift[132400]: kubelet I0213 04:57:21.347561 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:21 localhost.localdomain microshift[132400]: kubelet I0213 04:57:21.347621 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:21 localhost.localdomain microshift[132400]: kubelet I0213 04:57:21.663469 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:57:21 localhost.localdomain microshift[132400]: kubelet E0213 04:57:21.663831 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:57:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:23.287079 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:24 localhost.localdomain microshift[132400]: kubelet I0213 04:57:24.348694 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:24 localhost.localdomain microshift[132400]: kubelet I0213 04:57:24.349040 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:27 localhost.localdomain microshift[132400]: kubelet I0213 04:57:27.349365 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:27 localhost.localdomain microshift[132400]: kubelet I0213 04:57:27.349410 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:27 localhost.localdomain microshift[132400]: kubelet I0213 04:57:27.663905 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:57:27 localhost.localdomain microshift[132400]: kubelet E0213 04:57:27.664248 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:57:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:28.287161 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:30 localhost.localdomain microshift[132400]: kubelet I0213 04:57:30.350177 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:30 localhost.localdomain microshift[132400]: kubelet I0213 04:57:30.350236 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:30 localhost.localdomain microshift[132400]: kubelet I0213 04:57:30.663976 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:57:30 localhost.localdomain microshift[132400]: kubelet E0213 04:57:30.664316 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:57:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:33.287182 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:33 localhost.localdomain microshift[132400]: kubelet I0213 04:57:33.350866 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:33 localhost.localdomain microshift[132400]: kubelet I0213 04:57:33.350922 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:34 localhost.localdomain microshift[132400]: kubelet I0213 04:57:34.663889 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:57:34 localhost.localdomain microshift[132400]: kubelet E0213 04:57:34.664570 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:57:36 localhost.localdomain microshift[132400]: kubelet I0213 04:57:36.352790 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:36 localhost.localdomain microshift[132400]: kubelet I0213 04:57:36.352850 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:38.287074 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:39 localhost.localdomain microshift[132400]: kubelet I0213 04:57:39.353895 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:39 localhost.localdomain microshift[132400]: kubelet I0213 04:57:39.353947 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:57:40.132827 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:57:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:57:40.132993 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:57:41 localhost.localdomain microshift[132400]: kubelet I0213 04:57:41.663867 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:57:41 localhost.localdomain microshift[132400]: kubelet E0213 04:57:41.664453 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:57:42 localhost.localdomain microshift[132400]: kubelet I0213 04:57:42.354705 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:42 localhost.localdomain microshift[132400]: kubelet I0213 04:57:42.354754 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:43.286817 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:43 localhost.localdomain microshift[132400]: kubelet I0213 04:57:43.664088 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:57:43 localhost.localdomain microshift[132400]: kubelet E0213 04:57:43.664272 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:57:45 localhost.localdomain microshift[132400]: kubelet I0213 04:57:45.355580 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:45 localhost.localdomain microshift[132400]: kubelet I0213 04:57:45.355858 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:46 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:57:46.824435 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:57:46 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:57:46.824780 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 04:57:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:48.287277 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:48 localhost.localdomain microshift[132400]: kubelet I0213 04:57:48.356741 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:48 localhost.localdomain microshift[132400]: kubelet I0213 04:57:48.357137 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:48 localhost.localdomain microshift[132400]: kubelet I0213 04:57:48.666873 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:57:48 localhost.localdomain microshift[132400]: kubelet E0213 04:57:48.667309 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:57:51 localhost.localdomain microshift[132400]: kubelet I0213 04:57:51.358590 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:51 localhost.localdomain microshift[132400]: kubelet I0213 04:57:51.358964 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:53.286398 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:57:54 localhost.localdomain microshift[132400]: kubelet I0213 04:57:54.359259 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:54 localhost.localdomain microshift[132400]: kubelet I0213 04:57:54.359807 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:55 localhost.localdomain microshift[132400]: kubelet I0213 04:57:55.672340 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:57:55 localhost.localdomain microshift[132400]: kubelet E0213 04:57:55.672856 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:57:57 localhost.localdomain microshift[132400]: kubelet I0213 04:57:57.360831 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:57:57 localhost.localdomain microshift[132400]: kubelet I0213 04:57:57.361333 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:57:57 localhost.localdomain microshift[132400]: kubelet I0213 04:57:57.663395 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:57:57 localhost.localdomain microshift[132400]: kubelet E0213 04:57:57.663587 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:57:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:57:58.286573 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:00 localhost.localdomain microshift[132400]: kubelet I0213 04:58:00.362399 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:00 localhost.localdomain microshift[132400]: kubelet I0213 04:58:00.362858 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:02 localhost.localdomain microshift[132400]: kubelet I0213 04:58:02.664109 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:58:02 localhost.localdomain microshift[132400]: kubelet E0213 04:58:02.665080 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:58:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:03.286350 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:03 localhost.localdomain microshift[132400]: kubelet I0213 04:58:03.363273 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:03 localhost.localdomain microshift[132400]: kubelet I0213 04:58:03.363532 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:06 localhost.localdomain microshift[132400]: kubelet I0213 04:58:06.367471 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:06 localhost.localdomain microshift[132400]: kubelet I0213 04:58:06.368199 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:06 localhost.localdomain microshift[132400]: kubelet I0213 04:58:06.667304 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:58:07 localhost.localdomain microshift[132400]: kubelet I0213 04:58:07.001991 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985}
Feb 13 04:58:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:08.286467 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:09 localhost.localdomain microshift[132400]: kubelet I0213 04:58:09.369100 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:09 localhost.localdomain microshift[132400]: kubelet I0213 04:58:09.369161 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:09 localhost.localdomain microshift[132400]: kubelet I0213 04:58:09.663311 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:58:09 localhost.localdomain microshift[132400]: kubelet E0213 04:58:09.663578 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:58:10 localhost.localdomain microshift[132400]: kubelet I0213 04:58:10.008732 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" exitCode=1
Feb 13 04:58:10 localhost.localdomain microshift[132400]: kubelet I0213 04:58:10.008783 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985}
Feb 13 04:58:10 localhost.localdomain microshift[132400]: kubelet I0213 04:58:10.008824 132400 scope.go:115] "RemoveContainer" containerID="a7d6a91c1063e9bcc3dc2d7cc9f69eed1a818bbbeead4b8a63daad4f8c6480a7"
Feb 13 04:58:10 localhost.localdomain microshift[132400]: kubelet I0213 04:58:10.009232 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 04:58:10 localhost.localdomain microshift[132400]: kubelet E0213 04:58:10.009579 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:58:12 localhost.localdomain microshift[132400]: kubelet I0213 04:58:12.370051 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:12 localhost.localdomain microshift[132400]: kubelet I0213 04:58:12.370857 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:13.287219 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:14 localhost.localdomain microshift[132400]: kubelet I0213 04:58:14.632012 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:14 localhost.localdomain microshift[132400]: kubelet I0213 04:58:14.632064 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:15 localhost.localdomain microshift[132400]: kubelet I0213 04:58:15.371935 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:15 localhost.localdomain microshift[132400]: kubelet I0213 04:58:15.371983 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:15 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:58:15.553161 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:58:15 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:58:15.553308 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 04:58:16 localhost.localdomain microshift[132400]: kubelet I0213 04:58:16.663858 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:58:16 localhost.localdomain microshift[132400]: kubelet E0213 04:58:16.667419 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:58:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:18.286406 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:18 localhost.localdomain microshift[132400]: kubelet I0213 04:58:18.372708 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:18 localhost.localdomain microshift[132400]: kubelet I0213 04:58:18.372896 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:20 localhost.localdomain microshift[132400]: kubelet I0213 04:58:20.664149 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 04:58:20 localhost.localdomain microshift[132400]: kubelet E0213 04:58:20.664304 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 04:58:20 localhost.localdomain microshift[132400]: kubelet I0213 04:58:20.902096 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 04:58:20 localhost.localdomain microshift[132400]: kubelet I0213 04:58:20.902521 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 04:58:20 localhost.localdomain microshift[132400]: kubelet E0213 04:58:20.902865 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 04:58:21 localhost.localdomain microshift[132400]: kubelet I0213 04:58:21.373515 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:21 localhost.localdomain microshift[132400]: kubelet I0213 04:58:21.373715 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:23.287005 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:24 localhost.localdomain microshift[132400]: kubelet I0213 04:58:24.374141 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:24 localhost.localdomain microshift[132400]: kubelet I0213 04:58:24.374465 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:24 localhost.localdomain microshift[132400]: kubelet I0213 04:58:24.632071 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:24 localhost.localdomain microshift[132400]: kubelet I0213 04:58:24.632109 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:27 localhost.localdomain microshift[132400]: kubelet I0213 04:58:27.374996 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:27 localhost.localdomain microshift[132400]: kubelet I0213 04:58:27.375043 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:27 localhost.localdomain microshift[132400]: kubelet I0213 04:58:27.664328 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5"
Feb 13 04:58:27 localhost.localdomain microshift[132400]: kubelet E0213 04:58:27.664746 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 04:58:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:28.286898 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:30 localhost.localdomain microshift[132400]: kubelet I0213 04:58:30.375539 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 04:58:30 localhost.localdomain microshift[132400]: kubelet I0213 04:58:30.375884 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 04:58:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:33.287013 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 04:58:33 localhost.localdomain microshift[132400]: kubelet I0213 04:58:33.376719 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\":
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:33 localhost.localdomain microshift[132400]: kubelet I0213 04:58:33.376928 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:34 localhost.localdomain microshift[132400]: kubelet I0213 04:58:34.632168 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:34 localhost.localdomain microshift[132400]: kubelet I0213 04:58:34.632526 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:35 localhost.localdomain microshift[132400]: kubelet I0213 04:58:35.664146 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:58:35 localhost.localdomain microshift[132400]: kubelet I0213 04:58:35.673273 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:58:35 localhost.localdomain microshift[132400]: kubelet E0213 04:58:35.673375 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:58:35 localhost.localdomain microshift[132400]: kubelet E0213 04:58:35.673728 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:58:36 localhost.localdomain microshift[132400]: kubelet I0213 04:58:36.377191 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:36 localhost.localdomain microshift[132400]: kubelet I0213 04:58:36.377257 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:37 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:58:37.660743 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:58:37 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:58:37.660805 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list 
*v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:58:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:38.286285 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:58:39 localhost.localdomain microshift[132400]: kubelet I0213 04:58:39.377917 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:39 localhost.localdomain microshift[132400]: kubelet I0213 04:58:39.378230 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:40 localhost.localdomain microshift[132400]: kubelet I0213 04:58:40.663694 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:58:41 localhost.localdomain microshift[132400]: kubelet I0213 04:58:41.060625 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969} Feb 13 04:58:41 localhost.localdomain microshift[132400]: kubelet I0213 04:58:41.060975 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:58:42 localhost.localdomain microshift[132400]: kubelet I0213 04:58:42.061291 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller 
namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:42 localhost.localdomain microshift[132400]: kubelet I0213 04:58:42.061717 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:42 localhost.localdomain microshift[132400]: kubelet I0213 04:58:42.378359 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:42 localhost.localdomain microshift[132400]: kubelet I0213 04:58:42.378604 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:43 localhost.localdomain microshift[132400]: kubelet I0213 04:58:43.062887 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:43 localhost.localdomain microshift[132400]: kubelet I0213 04:58:43.062942 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:43.286894 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 04:58:44.067154 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" exitCode=1 Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 04:58:44.067184 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969} Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 04:58:44.067205 132400 scope.go:115] "RemoveContainer" containerID="6f9b456c07684578cc3233d1f484c920cf0b84312d21b45cc40cfcc40b863ff5" Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 04:58:44.067448 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet E0213 04:58:44.067745 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 
04:58:44.631605 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:44 localhost.localdomain microshift[132400]: kubelet I0213 04:58:44.631685 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:45 localhost.localdomain microshift[132400]: kubelet I0213 04:58:45.378924 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:45 localhost.localdomain microshift[132400]: kubelet I0213 04:58:45.379428 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:47 localhost.localdomain microshift[132400]: kubelet I0213 04:58:47.664367 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:58:47 localhost.localdomain microshift[132400]: kubelet E0213 04:58:47.665080 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:58:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:48.287153 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:58:48 localhost.localdomain microshift[132400]: kubelet I0213 04:58:48.379766 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:48 localhost.localdomain microshift[132400]: kubelet I0213 04:58:48.379986 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:49 localhost.localdomain microshift[132400]: kubelet I0213 04:58:49.663595 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:58:49 localhost.localdomain microshift[132400]: kubelet E0213 04:58:49.663867 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:58:49 localhost.localdomain microshift[132400]: kubelet I0213 04:58:49.869892 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod 
\"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:58:49 localhost.localdomain microshift[132400]: kubelet E0213 04:58:49.870149 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:00:51.870138275 -0500 EST m=+3339.050484544 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 04:58:51 localhost.localdomain microshift[132400]: kubelet I0213 04:58:51.380157 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:51 localhost.localdomain microshift[132400]: kubelet I0213 04:58:51.380192 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:53.286593 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.380309 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.380355 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.631818 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.632061 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.632116 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.632470 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 04:58:54 localhost.localdomain microshift[132400]: kubelet I0213 04:58:54.632773 132400 kuberuntime_container.go:709] "Killing container with a 
grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977" gracePeriod=30 Feb 13 04:58:56 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:58:56.663886 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:58:56 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:58:56.664887 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:58:56 localhost.localdomain microshift[132400]: kubelet I0213 04:58:56.664736 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:58:56 localhost.localdomain microshift[132400]: kubelet E0213 04:58:56.668592 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:58:57 localhost.localdomain microshift[132400]: kubelet I0213 04:58:57.380830 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:58:57 localhost.localdomain microshift[132400]: kubelet I0213 04:58:57.381778 
132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:58:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:58:58.287049 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:00 localhost.localdomain microshift[132400]: kubelet I0213 04:59:00.382652 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:00 localhost.localdomain microshift[132400]: kubelet I0213 04:59:00.383011 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:01 localhost.localdomain microshift[132400]: kubelet I0213 04:59:01.664289 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:59:01 localhost.localdomain microshift[132400]: kubelet E0213 04:59:01.664952 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:59:02 localhost.localdomain microshift[132400]: kubelet I0213 04:59:02.664906 132400 scope.go:115] "RemoveContainer" 
containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:59:02 localhost.localdomain microshift[132400]: kubelet E0213 04:59:02.665109 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:59:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:03.286674 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:03 localhost.localdomain microshift[132400]: kubelet I0213 04:59:03.384034 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:03 localhost.localdomain microshift[132400]: kubelet I0213 04:59:03.384089 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:06 localhost.localdomain microshift[132400]: kubelet I0213 04:59:06.385268 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:06 localhost.localdomain microshift[132400]: kubelet I0213 04:59:06.385324 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:08.286619 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:08 localhost.localdomain microshift[132400]: kubelet I0213 04:59:08.663413 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:59:08 localhost.localdomain microshift[132400]: kubelet E0213 04:59:08.663735 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:59:09 localhost.localdomain microshift[132400]: kubelet I0213 04:59:09.386489 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:09 localhost.localdomain microshift[132400]: kubelet I0213 04:59:09.386980 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:12 localhost.localdomain microshift[132400]: kubelet I0213 04:59:12.387753 132400 patch_prober.go:28] interesting 
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:12 localhost.localdomain microshift[132400]: kubelet I0213 04:59:12.387816 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:12 localhost.localdomain microshift[132400]: kubelet I0213 04:59:12.664289 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:59:12 localhost.localdomain microshift[132400]: kubelet E0213 04:59:12.666085 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:59:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:13.287258 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:13 localhost.localdomain microshift[132400]: kubelet E0213 04:59:13.916136 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 04:59:13 localhost.localdomain microshift[132400]: kubelet E0213 04:59:13.916183 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted 
volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.115147 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977" exitCode=0 Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.115450 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977} Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.115505 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6} Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.115545 132400 scope.go:115] "RemoveContainer" containerID="2b3a27fc1cea9a8fee58d062ac1de621c28332cfea2deef5d1d39f5392470565" Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.388233 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.388492 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" 
probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:15 localhost.localdomain microshift[132400]: kubelet I0213 04:59:15.388716 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 04:59:16 localhost.localdomain microshift[132400]: kubelet I0213 04:59:16.663806 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:59:16 localhost.localdomain microshift[132400]: kubelet E0213 04:59:16.664047 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:59:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:18.286231 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:22 localhost.localdomain microshift[132400]: kubelet I0213 04:59:22.663968 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:59:22 localhost.localdomain microshift[132400]: kubelet E0213 04:59:22.664698 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:59:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 
04:59:23.286339 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:26 localhost.localdomain microshift[132400]: kubelet I0213 04:59:26.192593 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 04:59:26 localhost.localdomain microshift[132400]: kubelet I0213 04:59:26.193195 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:59:26 localhost.localdomain microshift[132400]: kubelet E0213 04:59:26.193558 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:59:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:59:26.618816 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:59:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:59:26.619023 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 04:59:26 localhost.localdomain microshift[132400]: kubelet I0213 04:59:26.666579 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:59:26 localhost.localdomain microshift[132400]: kubelet E0213 
04:59:26.667006 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:59:27 localhost.localdomain microshift[132400]: kubelet I0213 04:59:27.346327 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:27 localhost.localdomain microshift[132400]: kubelet I0213 04:59:27.346707 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:27 localhost.localdomain microshift[132400]: kubelet I0213 04:59:27.663558 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:59:27 localhost.localdomain microshift[132400]: kubelet E0213 04:59:27.664033 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:59:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:28.287162 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:30 
localhost.localdomain microshift[132400]: kubelet I0213 04:59:30.347005 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:30 localhost.localdomain microshift[132400]: kubelet I0213 04:59:30.347369 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:33.286864 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:33 localhost.localdomain microshift[132400]: kubelet I0213 04:59:33.347809 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:33 localhost.localdomain microshift[132400]: kubelet I0213 04:59:33.348056 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:36 localhost.localdomain microshift[132400]: kubelet I0213 04:59:36.348880 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 
04:59:36 localhost.localdomain microshift[132400]: kubelet I0213 04:59:36.348964 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:37 localhost.localdomain microshift[132400]: kubelet I0213 04:59:37.663825 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:59:37 localhost.localdomain microshift[132400]: kubelet E0213 04:59:37.664792 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:59:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:38.287042 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:38 localhost.localdomain microshift[132400]: kubelet I0213 04:59:38.664021 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:59:38 localhost.localdomain microshift[132400]: kubelet I0213 04:59:38.664760 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:59:38 localhost.localdomain microshift[132400]: kubelet E0213 04:59:38.664869 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node 
pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:59:38 localhost.localdomain microshift[132400]: kubelet E0213 04:59:38.664959 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:59:39 localhost.localdomain microshift[132400]: kubelet I0213 04:59:39.349154 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:39 localhost.localdomain microshift[132400]: kubelet I0213 04:59:39.349414 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:42 localhost.localdomain microshift[132400]: kubelet I0213 04:59:42.350416 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:42 localhost.localdomain microshift[132400]: kubelet I0213 04:59:42.350506 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:43.287208 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:44 localhost.localdomain microshift[132400]: kube-apiserver W0213 04:59:44.543488 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:59:44 localhost.localdomain microshift[132400]: kube-apiserver E0213 04:59:44.543814 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 04:59:45 localhost.localdomain microshift[132400]: kubelet I0213 04:59:45.351442 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:45 localhost.localdomain microshift[132400]: kubelet I0213 04:59:45.351691 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:48.286428 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:48 localhost.localdomain microshift[132400]: kubelet I0213 04:59:48.352463 132400 patch_prober.go:28] interesting 
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:48 localhost.localdomain microshift[132400]: kubelet I0213 04:59:48.352515 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:50 localhost.localdomain microshift[132400]: kubelet I0213 04:59:50.664100 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 04:59:50 localhost.localdomain microshift[132400]: kubelet E0213 04:59:50.664463 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 04:59:51 localhost.localdomain microshift[132400]: kubelet I0213 04:59:51.353765 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:51 localhost.localdomain microshift[132400]: kubelet I0213 04:59:51.354040 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:52 localhost.localdomain microshift[132400]: kubelet I0213 04:59:52.664213 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 04:59:52 localhost.localdomain microshift[132400]: kubelet E0213 04:59:52.664717 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 04:59:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:53.286227 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 04:59:53 localhost.localdomain microshift[132400]: kubelet I0213 04:59:53.663581 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 04:59:53 localhost.localdomain microshift[132400]: kubelet E0213 04:59:53.664210 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 04:59:54 localhost.localdomain microshift[132400]: kubelet I0213 04:59:54.354550 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:54 localhost.localdomain microshift[132400]: 
kubelet I0213 04:59:54.354611 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:57 localhost.localdomain microshift[132400]: kubelet I0213 04:59:57.355250 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 04:59:57 localhost.localdomain microshift[132400]: kubelet I0213 04:59:57.355290 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 04:59:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 04:59:58.287422 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:00:00 localhost.localdomain microshift[132400]: kubelet I0213 05:00:00.355994 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:00 localhost.localdomain microshift[132400]: kubelet I0213 05:00:00.356063 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 13 05:00:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:03.286637 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:00:03 localhost.localdomain microshift[132400]: kubelet I0213 05:00:03.356994 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:03 localhost.localdomain microshift[132400]: kubelet I0213 05:00:03.357231 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:04 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:00:04.605191 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:00:04 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:00:04.605225 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:00:04 localhost.localdomain microshift[132400]: kubelet I0213 05:00:04.663689 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:00:04 localhost.localdomain microshift[132400]: kubelet E0213 05:00:04.664160 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:00:05 localhost.localdomain microshift[132400]: kubelet I0213 05:00:05.663540 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:00:05 localhost.localdomain microshift[132400]: kubelet E0213 05:00:05.663844 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:00:06 localhost.localdomain microshift[132400]: kubelet I0213 05:00:06.357487 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:06 localhost.localdomain microshift[132400]: kubelet I0213 05:00:06.357574 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:07 localhost.localdomain microshift[132400]: kubelet I0213 05:00:07.664306 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72" Feb 13 05:00:08 localhost.localdomain microshift[132400]: 
kubelet I0213 05:00:08.200857 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719} Feb 13 05:00:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:08.286783 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:00:09 localhost.localdomain microshift[132400]: kubelet I0213 05:00:09.358011 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:09 localhost.localdomain microshift[132400]: kubelet I0213 05:00:09.358301 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:12 localhost.localdomain microshift[132400]: kubelet I0213 05:00:12.358847 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:12 localhost.localdomain microshift[132400]: kubelet I0213 05:00:12.358906 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:13 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 05:00:13.286431 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:00:15 localhost.localdomain microshift[132400]: kubelet I0213 05:00:15.359164 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:15 localhost.localdomain microshift[132400]: kubelet I0213 05:00:15.359601 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:18.286652 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:00:18 localhost.localdomain microshift[132400]: kubelet I0213 05:00:18.360395 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:18 localhost.localdomain microshift[132400]: kubelet I0213 05:00:18.360451 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:19 localhost.localdomain microshift[132400]: kubelet I0213 05:00:19.663571 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:00:19 localhost.localdomain 
microshift[132400]: kubelet E0213 05:00:19.663961 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:00:20 localhost.localdomain microshift[132400]: kubelet I0213 05:00:20.664085 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:00:20 localhost.localdomain microshift[132400]: kubelet E0213 05:00:20.664394 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:00:21 localhost.localdomain microshift[132400]: kubelet I0213 05:00:21.361216 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:00:21 localhost.localdomain microshift[132400]: kubelet I0213 05:00:21.361266 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:00:22 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:00:22.876334 132400 reflector.go:424] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:00:22 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:00:22.876371 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:00:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:23.286642 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:24 localhost.localdomain microshift[132400]: kubelet I0213 05:00:24.361445 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:24 localhost.localdomain microshift[132400]: kubelet I0213 05:00:24.361495 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:24 localhost.localdomain microshift[132400]: kubelet I0213 05:00:24.631560 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:24 localhost.localdomain microshift[132400]: kubelet I0213 05:00:24.631882 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:27 localhost.localdomain microshift[132400]: kubelet I0213 05:00:27.362428 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:27 localhost.localdomain microshift[132400]: kubelet I0213 05:00:27.362476 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:28.287042 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:30 localhost.localdomain microshift[132400]: kubelet I0213 05:00:30.363078 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:30 localhost.localdomain microshift[132400]: kubelet I0213 05:00:30.363479 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:30 localhost.localdomain microshift[132400]: kubelet I0213 05:00:30.665184 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:00:30 localhost.localdomain microshift[132400]: kubelet E0213 05:00:30.666297 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:00:31 localhost.localdomain microshift[132400]: kubelet I0213 05:00:31.665071 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:00:31 localhost.localdomain microshift[132400]: kubelet E0213 05:00:31.665417 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:00:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:33.287376 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:33 localhost.localdomain microshift[132400]: kubelet I0213 05:00:33.364740 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:33 localhost.localdomain microshift[132400]: kubelet I0213 05:00:33.364798 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:34 localhost.localdomain microshift[132400]: kubelet I0213 05:00:34.632455 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:34 localhost.localdomain microshift[132400]: kubelet I0213 05:00:34.632810 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:36 localhost.localdomain microshift[132400]: kubelet I0213 05:00:36.365016 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:36 localhost.localdomain microshift[132400]: kubelet I0213 05:00:36.365071 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:38.286885 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:39 localhost.localdomain microshift[132400]: kubelet I0213 05:00:39.365470 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:39 localhost.localdomain microshift[132400]: kubelet I0213 05:00:39.365520 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.256085 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" exitCode=255
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.256124 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719}
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.256149 132400 scope.go:115] "RemoveContainer" containerID="168558a54c0be15ec1265cc778a5399f334ce59802274f91261fbc3401c90f72"
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.256375 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet E0213 05:00:42.256548 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.366408 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.366695 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet I0213 05:00:42.664060 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:00:42 localhost.localdomain microshift[132400]: kubelet E0213 05:00:42.664735 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:00:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:43.286313 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:43 localhost.localdomain microshift[132400]: kubelet I0213 05:00:43.663448 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:00:43 localhost.localdomain microshift[132400]: kubelet E0213 05:00:43.663978 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:00:44 localhost.localdomain microshift[132400]: kubelet I0213 05:00:44.631623 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:44 localhost.localdomain microshift[132400]: kubelet I0213 05:00:44.631698 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:45 localhost.localdomain microshift[132400]: kubelet I0213 05:00:45.367293 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:45 localhost.localdomain microshift[132400]: kubelet I0213 05:00:45.367354 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:48.286520 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:48 localhost.localdomain microshift[132400]: kubelet I0213 05:00:48.368090 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:48 localhost.localdomain microshift[132400]: kubelet I0213 05:00:48.368305 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:51 localhost.localdomain microshift[132400]: kubelet I0213 05:00:51.368960 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:51 localhost.localdomain microshift[132400]: kubelet I0213 05:00:51.369098 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:51 localhost.localdomain microshift[132400]: kubelet I0213 05:00:51.966767 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:00:51 localhost.localdomain microshift[132400]: kubelet E0213 05:00:51.966992 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:02:53.966978888 -0500 EST m=+3461.147325159 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:00:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:53.286993 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:00:54.255120 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:00:54.255148 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet I0213 05:00:54.369543 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet I0213 05:00:54.370000 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet I0213 05:00:54.632228 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet I0213 05:00:54.632274 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet I0213 05:00:54.664849 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:00:54 localhost.localdomain microshift[132400]: kubelet E0213 05:00:54.665134 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:00:55 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:00:55.355345 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:00:55 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:00:55.355531 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:00:55 localhost.localdomain microshift[132400]: kubelet I0213 05:00:55.667645 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:00:55 localhost.localdomain microshift[132400]: kubelet E0213 05:00:55.668046 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:00:56 localhost.localdomain microshift[132400]: kubelet I0213 05:00:56.663810 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:00:56 localhost.localdomain microshift[132400]: kubelet E0213 05:00:56.664166 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:00:57 localhost.localdomain microshift[132400]: kubelet I0213 05:00:57.370470 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:00:57 localhost.localdomain microshift[132400]: kubelet I0213 05:00:57.370542 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:00:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:00:58.286613 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:00 localhost.localdomain microshift[132400]: kubelet I0213 05:01:00.371120 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:00 localhost.localdomain microshift[132400]: kubelet I0213 05:01:00.371458 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:03.287258 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:03 localhost.localdomain microshift[132400]: kubelet I0213 05:01:03.372516 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:03 localhost.localdomain microshift[132400]: kubelet I0213 05:01:03.372575 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:04 localhost.localdomain microshift[132400]: kubelet I0213 05:01:04.631638 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:04 localhost.localdomain microshift[132400]: kubelet I0213 05:01:04.632016 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:04 localhost.localdomain microshift[132400]: kubelet I0213 05:01:04.632081 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 05:01:04 localhost.localdomain microshift[132400]: kubelet I0213 05:01:04.632476 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 05:01:04 localhost.localdomain microshift[132400]: kubelet I0213 05:01:04.632626 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" gracePeriod=30
Feb 13 05:01:06 localhost.localdomain microshift[132400]: kubelet I0213 05:01:06.373131 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:06 localhost.localdomain microshift[132400]: kubelet I0213 05:01:06.373177 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:06 localhost.localdomain microshift[132400]: kubelet I0213 05:01:06.664066 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:01:06 localhost.localdomain microshift[132400]: kubelet E0213 05:01:06.666372 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:01:07 localhost.localdomain microshift[132400]: kubelet I0213 05:01:07.663844 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:01:07 localhost.localdomain microshift[132400]: kubelet E0213 05:01:07.664195 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:01:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:08.287171 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:08 localhost.localdomain microshift[132400]: kubelet I0213 05:01:08.663629 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:01:08 localhost.localdomain microshift[132400]: kubelet E0213 05:01:08.664205 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:01:09 localhost.localdomain microshift[132400]: kubelet I0213 05:01:09.373382 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:09 localhost.localdomain microshift[132400]: kubelet I0213 05:01:09.373615 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:12 localhost.localdomain microshift[132400]: kubelet I0213 05:01:12.374489 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:12 localhost.localdomain microshift[132400]: kubelet I0213 05:01:12.374911 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:13.287025 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:15 localhost.localdomain microshift[132400]: kubelet I0213 05:01:15.375372 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:15 localhost.localdomain microshift[132400]: kubelet I0213 05:01:15.375420 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:17 localhost.localdomain microshift[132400]: kubelet E0213 05:01:17.113781 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:01:17 localhost.localdomain microshift[132400]: kubelet E0213 05:01:17.114197 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:01:17 localhost.localdomain microshift[132400]: kubelet I0213 05:01:17.664414 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:01:17 localhost.localdomain microshift[132400]: kubelet E0213 05:01:17.664936 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:01:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:18.286966 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:18 localhost.localdomain microshift[132400]: kubelet I0213 05:01:18.376431 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:18 localhost.localdomain microshift[132400]: kubelet I0213 05:01:18.376621 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:19 localhost.localdomain microshift[132400]: kubelet I0213 05:01:19.663566 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:01:19 localhost.localdomain microshift[132400]: kubelet E0213 05:01:19.664222 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:01:21 localhost.localdomain microshift[132400]: kubelet I0213 05:01:21.377713 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:21 localhost.localdomain microshift[132400]: kubelet I0213 05:01:21.377772 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:22 localhost.localdomain microshift[132400]: kubelet I0213 05:01:22.664394 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:01:22 localhost.localdomain microshift[132400]: kubelet E0213 05:01:22.665472 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:01:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:23.286820 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:24 localhost.localdomain microshift[132400]: kubelet I0213 05:01:24.378919 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:24 localhost.localdomain microshift[132400]: kubelet I0213 05:01:24.379390 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:24 localhost.localdomain microshift[132400]: kubelet E0213 05:01:24.760486 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:01:25 localhost.localdomain microshift[132400]: kubelet I0213 05:01:25.322875 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" exitCode=0
Feb 13 05:01:25 localhost.localdomain microshift[132400]: kubelet I0213 05:01:25.322903 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6}
Feb 13 05:01:25 localhost.localdomain microshift[132400]: kubelet I0213 05:01:25.322925 132400 scope.go:115] "RemoveContainer" containerID="71f83f979bd8e4c73dc7fb99587d5e8a538073697f5ba88bd55bdc81e0b40977"
Feb 13 05:01:25 localhost.localdomain microshift[132400]: kubelet I0213 05:01:25.323140 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:01:25 localhost.localdomain microshift[132400]: kubelet E0213 05:01:25.323369 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:01:27 localhost.localdomain microshift[132400]: kubelet I0213 05:01:27.380575 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:01:27 localhost.localdomain microshift[132400]: kubelet I0213 05:01:27.380622 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:01:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:28.286729 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:29 localhost.localdomain microshift[132400]: kubelet I0213 05:01:29.663708 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:01:29 localhost.localdomain microshift[132400]: kubelet E0213 05:01:29.664199 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:01:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:33.286560 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:01:33 localhost.localdomain microshift[132400]: kubelet I0213 05:01:33.663560 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:01:33 localhost.localdomain microshift[132400]: kubelet E0213 05:01:33.663917 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:01:35 localhost.localdomain microshift[132400]: kubelet I0213 05:01:35.664076 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:01:35 localhost.localdomain microshift[132400]: kubelet E0213 05:01:35.664391 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:01:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:38.286326 132400
net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:01:38 localhost.localdomain microshift[132400]: kubelet I0213 05:01:38.664978 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:01:38 localhost.localdomain microshift[132400]: kubelet E0213 05:01:38.665779 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:01:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:43.286852 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:01:43 localhost.localdomain microshift[132400]: kubelet I0213 05:01:43.664321 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:01:43 localhost.localdomain microshift[132400]: kubelet E0213 05:01:43.664572 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:01:44 localhost.localdomain microshift[132400]: kubelet I0213 05:01:44.664515 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:01:44 localhost.localdomain microshift[132400]: kubelet E0213 05:01:44.664862 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:01:46 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:01:46.234943 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:01:46 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:01:46.234975 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:01:47 localhost.localdomain microshift[132400]: kubelet I0213 05:01:47.663481 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:01:47 localhost.localdomain microshift[132400]: kubelet E0213 05:01:47.663799 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:01:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:48.286930 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:01:49 localhost.localdomain microshift[132400]: kubelet I0213 05:01:49.664189 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:01:49 
localhost.localdomain microshift[132400]: kubelet E0213 05:01:49.664575 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:01:52 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:01:52.723371 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:01:52 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:01:52.723415 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:01:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:01:53.286321 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:01:55 localhost.localdomain microshift[132400]: kubelet I0213 05:01:55.663479 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:01:55 localhost.localdomain microshift[132400]: kubelet E0213 05:01:55.663754 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:01:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 
05:01:58.286400 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:01:59 localhost.localdomain microshift[132400]: kubelet I0213 05:01:59.664114 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:01:59 localhost.localdomain microshift[132400]: kubelet E0213 05:01:59.664765 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:02:00 localhost.localdomain microshift[132400]: kubelet I0213 05:02:00.665725 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:02:00 localhost.localdomain microshift[132400]: kubelet E0213 05:02:00.666037 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:02:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:03.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:03 localhost.localdomain microshift[132400]: kubelet I0213 05:02:03.664297 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:02:03 localhost.localdomain microshift[132400]: kubelet E0213 05:02:03.664774 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:02:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:08.286956 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:09 localhost.localdomain microshift[132400]: kubelet I0213 05:02:09.663726 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:02:09 localhost.localdomain microshift[132400]: kubelet E0213 05:02:09.664176 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:02:11 localhost.localdomain microshift[132400]: kubelet I0213 05:02:11.663891 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:02:11 localhost.localdomain microshift[132400]: kubelet E0213 05:02:11.664457 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:02:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:13.287336 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:13 localhost.localdomain 
microshift[132400]: kubelet I0213 05:02:13.664520 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:02:13 localhost.localdomain microshift[132400]: kubelet E0213 05:02:13.665150 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:02:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:18.286291 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:18 localhost.localdomain microshift[132400]: kubelet I0213 05:02:18.665031 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:02:18 localhost.localdomain microshift[132400]: kubelet E0213 05:02:18.665614 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:02:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:23.287326 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:23 localhost.localdomain microshift[132400]: kubelet I0213 05:02:23.664099 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:02:23 localhost.localdomain microshift[132400]: kubelet E0213 05:02:23.664440 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:02:24 localhost.localdomain microshift[132400]: kubelet I0213 05:02:24.663519 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:02:24 localhost.localdomain microshift[132400]: kubelet E0213 05:02:24.663982 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:02:27 localhost.localdomain microshift[132400]: kubelet I0213 05:02:27.664039 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:02:27 localhost.localdomain microshift[132400]: kubelet E0213 05:02:27.664717 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:02:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:28.287282 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:30 localhost.localdomain microshift[132400]: kubelet I0213 05:02:30.663939 132400 scope.go:115] "RemoveContainer" 
containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:02:30 localhost.localdomain microshift[132400]: kubelet E0213 05:02:30.664191 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:02:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:33.287223 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:02:36.153961 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:02:36 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:02:36.153994 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:02:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:38.287036 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:38 localhost.localdomain microshift[132400]: kubelet I0213 05:02:38.664011 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:02:38 localhost.localdomain microshift[132400]: kubelet E0213 05:02:38.664491 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:02:38 localhost.localdomain microshift[132400]: kubelet I0213 05:02:38.666635 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:02:38 localhost.localdomain microshift[132400]: kubelet E0213 05:02:38.667413 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:02:40 localhost.localdomain microshift[132400]: kubelet I0213 05:02:40.663510 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:02:40 localhost.localdomain microshift[132400]: kubelet E0213 05:02:40.664205 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:02:41 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:02:41.705898 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:02:41 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:02:41.706227 
132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:02:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:43.286811 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:45 localhost.localdomain microshift[132400]: kubelet I0213 05:02:45.673056 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:02:45 localhost.localdomain microshift[132400]: kubelet E0213 05:02:45.673360 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:02:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:48.286643 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:50 localhost.localdomain microshift[132400]: kubelet I0213 05:02:50.665063 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:02:50 localhost.localdomain microshift[132400]: kubelet E0213 05:02:50.665613 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:02:52 localhost.localdomain microshift[132400]: kubelet I0213 05:02:52.664435 132400 scope.go:115] 
"RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:02:52 localhost.localdomain microshift[132400]: kubelet E0213 05:02:52.665206 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:02:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:53.286379 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:53 localhost.localdomain microshift[132400]: kubelet I0213 05:02:53.663554 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:02:53 localhost.localdomain microshift[132400]: kubelet E0213 05:02:53.663817 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:02:54 localhost.localdomain microshift[132400]: kubelet I0213 05:02:54.006488 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:02:54 localhost.localdomain microshift[132400]: kubelet E0213 05:02:54.006843 132400 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:04:56.006824768 -0500 EST m=+3583.187171051 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:02:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:02:58.286800 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:02:59 localhost.localdomain microshift[132400]: kubelet I0213 05:02:59.664307 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:02:59 localhost.localdomain microshift[132400]: kubelet E0213 05:02:59.664578 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:03:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:03.286378 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:03:04 localhost.localdomain microshift[132400]: kubelet I0213 05:03:04.665157 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969" Feb 13 05:03:04 localhost.localdomain microshift[132400]: kubelet E0213 05:03:04.665969 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:03:06 localhost.localdomain microshift[132400]: kubelet I0213 05:03:06.664024 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:03:06 localhost.localdomain microshift[132400]: kubelet E0213 05:03:06.664295 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:03:07 localhost.localdomain microshift[132400]: kubelet I0213 05:03:07.664364 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985" Feb 13 05:03:07 localhost.localdomain microshift[132400]: kubelet E0213 05:03:07.664985 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:03:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:08.287082 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:03:11 localhost.localdomain microshift[132400]: kubelet I0213 05:03:11.664015 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:03:11 localhost.localdomain 
microshift[132400]: kubelet E0213 05:03:11.664788 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:03:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:13.287079 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:15 localhost.localdomain microshift[132400]: kubelet I0213 05:03:15.664494 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:03:15 localhost.localdomain microshift[132400]: kubelet E0213 05:03:15.672588 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:03:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:18.286527 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:20 localhost.localdomain microshift[132400]: kubelet E0213 05:03:20.309577 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:03:20 localhost.localdomain microshift[132400]: kubelet E0213 05:03:20.309608 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:03:20 localhost.localdomain microshift[132400]: kubelet I0213 05:03:20.664117 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:03:20 localhost.localdomain microshift[132400]: kubelet E0213 05:03:20.664362 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:03:21 localhost.localdomain microshift[132400]: kubelet I0213 05:03:21.664246 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:03:21 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:03:21.964215 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:03:21 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:03:21.964403 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:03:22 localhost.localdomain microshift[132400]: kubelet I0213 05:03:22.504870 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31}
Feb 13 05:03:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:23.286721 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:23 localhost.localdomain microshift[132400]: kubelet I0213 05:03:23.663298 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:03:23 localhost.localdomain microshift[132400]: kubelet E0213 05:03:23.663564 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:03:25 localhost.localdomain microshift[132400]: kubelet I0213 05:03:25.511237 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" exitCode=1
Feb 13 05:03:25 localhost.localdomain microshift[132400]: kubelet I0213 05:03:25.511291 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31}
Feb 13 05:03:25 localhost.localdomain microshift[132400]: kubelet I0213 05:03:25.511880 132400 scope.go:115] "RemoveContainer" containerID="947f1142d2cba9c5cbd8d2af9fd52b0269efdb9db32da412098e3829c741a985"
Feb 13 05:03:25 localhost.localdomain microshift[132400]: kubelet I0213 05:03:25.512247 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:03:25 localhost.localdomain microshift[132400]: kubelet E0213 05:03:25.512550 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:03:26 localhost.localdomain microshift[132400]: kubelet I0213 05:03:26.664182 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:03:26 localhost.localdomain microshift[132400]: kubelet E0213 05:03:26.664646 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:03:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:28.287202 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:03:31.679634 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:03:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:03:31.679693 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:03:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:33.286214 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:34 localhost.localdomain microshift[132400]: kubelet I0213 05:03:34.664348 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:03:34 localhost.localdomain microshift[132400]: kubelet E0213 05:03:34.664692 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:03:34 localhost.localdomain microshift[132400]: kubelet I0213 05:03:34.665016 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:03:34 localhost.localdomain microshift[132400]: kubelet E0213 05:03:34.665753 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:03:37 localhost.localdomain microshift[132400]: kubelet I0213 05:03:37.664006 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:03:37 localhost.localdomain microshift[132400]: kubelet E0213 05:03:37.664728 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:03:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:38.286234 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:38 localhost.localdomain microshift[132400]: kubelet I0213 05:03:38.663859 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:03:38 localhost.localdomain microshift[132400]: kubelet E0213 05:03:38.664239 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:03:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:43.286922 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:45 localhost.localdomain microshift[132400]: kubelet I0213 05:03:45.672937 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:03:45 localhost.localdomain microshift[132400]: kubelet E0213 05:03:45.673857 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:03:47 localhost.localdomain microshift[132400]: kubelet I0213 05:03:47.663524 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:03:47 localhost.localdomain microshift[132400]: kubelet E0213 05:03:47.664241 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:03:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:48.286485 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:51 localhost.localdomain microshift[132400]: kubelet I0213 05:03:51.664182 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:03:51 localhost.localdomain microshift[132400]: kubelet E0213 05:03:51.664498 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:03:51 localhost.localdomain microshift[132400]: kubelet I0213 05:03:51.664758 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:03:52 localhost.localdomain microshift[132400]: kubelet I0213 05:03:52.554854 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94}
Feb 13 05:03:52 localhost.localdomain microshift[132400]: kubelet I0213 05:03:52.557741 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 05:03:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:53.287071 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:53 localhost.localdomain microshift[132400]: kubelet I0213 05:03:53.555744 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:03:53 localhost.localdomain microshift[132400]: kubelet I0213 05:03:53.556016 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:03:54 localhost.localdomain microshift[132400]: kubelet I0213 05:03:54.557704 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:03:54 localhost.localdomain microshift[132400]: kubelet I0213 05:03:54.558169 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:03:55 localhost.localdomain microshift[132400]: kubelet I0213 05:03:55.561224 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" exitCode=1
Feb 13 05:03:55 localhost.localdomain microshift[132400]: kubelet I0213 05:03:55.561266 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94}
Feb 13 05:03:55 localhost.localdomain microshift[132400]: kubelet I0213 05:03:55.561290 132400 scope.go:115] "RemoveContainer" containerID="9b554d0dcbec1fbd977647b227ecbe6833b5c9a68a193253a75b66be7752b969"
Feb 13 05:03:55 localhost.localdomain microshift[132400]: kubelet I0213 05:03:55.561694 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:03:55 localhost.localdomain microshift[132400]: kubelet E0213 05:03:55.562197 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:03:57 localhost.localdomain microshift[132400]: kubelet I0213 05:03:57.663809 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:03:57 localhost.localdomain microshift[132400]: kubelet E0213 05:03:57.664077 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:03:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:03:58.287019 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:03:58 localhost.localdomain microshift[132400]: kubelet I0213 05:03:58.665677 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:03:58 localhost.localdomain microshift[132400]: kubelet E0213 05:03:58.667305 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:03.287175 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:05 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:04:05.146197 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:04:05 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:04:05.146639 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:04:06 localhost.localdomain microshift[132400]: kubelet I0213 05:04:06.664383 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:06 localhost.localdomain microshift[132400]: kubelet E0213 05:04:06.667213 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:04:07 localhost.localdomain microshift[132400]: kubelet I0213 05:04:07.663923 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:04:07 localhost.localdomain microshift[132400]: kubelet E0213 05:04:07.664496 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:04:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:08.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:09 localhost.localdomain microshift[132400]: kubelet I0213 05:04:09.664150 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:04:09 localhost.localdomain microshift[132400]: kubelet E0213 05:04:09.664450 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:12 localhost.localdomain microshift[132400]: kubelet I0213 05:04:12.665361 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:04:12 localhost.localdomain microshift[132400]: kubelet E0213 05:04:12.666258 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:04:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:13.286219 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:18.286271 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:18 localhost.localdomain microshift[132400]: kubelet I0213 05:04:18.664236 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:04:18 localhost.localdomain microshift[132400]: kubelet E0213 05:04:18.664760 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:04:18 localhost.localdomain microshift[132400]: kubelet I0213 05:04:18.665184 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:18 localhost.localdomain microshift[132400]: kubelet E0213 05:04:18.665521 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:04:19 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:04:19.663647 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:04:19 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:04:19.663690 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:04:20 localhost.localdomain microshift[132400]: kubelet I0213 05:04:20.901557 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 05:04:20 localhost.localdomain microshift[132400]: kubelet I0213 05:04:20.902061 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:20 localhost.localdomain microshift[132400]: kubelet E0213 05:04:20.902435 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:04:21 localhost.localdomain microshift[132400]: kubelet I0213 05:04:21.664111 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:04:21 localhost.localdomain microshift[132400]: kubelet E0213 05:04:21.664480 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:23.286869 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:26 localhost.localdomain microshift[132400]: kubelet I0213 05:04:26.192517 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 05:04:26 localhost.localdomain microshift[132400]: kubelet I0213 05:04:26.193570 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:04:26 localhost.localdomain microshift[132400]: kubelet E0213 05:04:26.194367 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:04:26 localhost.localdomain microshift[132400]: kubelet I0213 05:04:26.663569 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:04:26 localhost.localdomain microshift[132400]: kubelet E0213 05:04:26.663827 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:04:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:28.287001 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:32 localhost.localdomain microshift[132400]: kubelet I0213 05:04:32.664015 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:04:32 localhost.localdomain microshift[132400]: kubelet E0213 05:04:32.664578 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:33.287191 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:33 localhost.localdomain microshift[132400]: kubelet I0213 05:04:33.663812 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:33 localhost.localdomain microshift[132400]: kubelet E0213 05:04:33.664462 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:04:37 localhost.localdomain microshift[132400]: kubelet I0213 05:04:37.663520 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:04:37 localhost.localdomain microshift[132400]: kubelet E0213 05:04:37.664165 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:04:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:38.287184 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:38 localhost.localdomain microshift[132400]: kubelet I0213 05:04:38.664929 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:04:38 localhost.localdomain microshift[132400]: kubelet E0213 05:04:38.665284 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:04:42 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:04:42.165280 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:04:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:04:42.165313 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:04:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:43.287111 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:45 localhost.localdomain microshift[132400]: kubelet I0213 05:04:45.663910 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:04:45 localhost.localdomain microshift[132400]: kubelet E0213 05:04:45.664192 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:48.287247 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:48 localhost.localdomain microshift[132400]: kubelet I0213 05:04:48.664387 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:48 localhost.localdomain microshift[132400]: kubelet E0213 05:04:48.665878 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:04:49 localhost.localdomain microshift[132400]: kubelet I0213 05:04:49.663501 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:04:49 localhost.localdomain microshift[132400]: kubelet E0213 05:04:49.663850 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:04:50 localhost.localdomain microshift[132400]: kubelet I0213 05:04:50.664531 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:04:50 localhost.localdomain microshift[132400]: kubelet E0213 05:04:50.664717 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:04:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:53.286874 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:56 localhost.localdomain microshift[132400]: kubelet I0213 05:04:56.034580 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:04:56 localhost.localdomain microshift[132400]: kubelet E0213 05:04:56.034752 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:06:58.034736464 -0500 EST m=+3705.215082746 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:04:58 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:04:58.227699 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:04:58 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:04:58.227978 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:04:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:04:58.286443 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:04:58 localhost.localdomain microshift[132400]: kubelet I0213 05:04:58.664520 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6"
Feb 13 05:04:58 localhost.localdomain microshift[132400]: kubelet E0213 05:04:58.664798 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:04:59 localhost.localdomain microshift[132400]: kubelet I0213 05:04:59.663754 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:04:59 localhost.localdomain microshift[132400]: kubelet E0213 05:04:59.664695 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:05:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:03.286189 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:05:04 localhost.localdomain microshift[132400]: kubelet I0213 05:05:04.663943 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719"
Feb 13 05:05:04 localhost.localdomain microshift[132400]: kubelet E0213 05:05:04.664229 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:05:04 localhost.localdomain microshift[132400]: kubelet I0213 05:05:04.664732 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:05:04 localhost.localdomain microshift[132400]: kubelet E0213 05:05:04.665354 132400 pod_workers.go:965] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:05:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:08.286817 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:10 localhost.localdomain microshift[132400]: kubelet I0213 05:05:10.664057 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:05:10 localhost.localdomain microshift[132400]: kubelet E0213 05:05:10.665495 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:05:11 localhost.localdomain microshift[132400]: kubelet I0213 05:05:11.663511 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:05:11 localhost.localdomain microshift[132400]: kubelet E0213 05:05:11.663956 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:05:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:13.286281 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:16 
localhost.localdomain microshift[132400]: kubelet I0213 05:05:16.667206 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:05:16 localhost.localdomain microshift[132400]: kubelet E0213 05:05:16.667567 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:05:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:18.286718 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:18 localhost.localdomain microshift[132400]: kubelet I0213 05:05:18.663632 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:05:18 localhost.localdomain microshift[132400]: kubelet E0213 05:05:18.664226 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:05:22 localhost.localdomain microshift[132400]: kubelet I0213 05:05:22.664081 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:05:22 localhost.localdomain microshift[132400]: kubelet E0213 05:05:22.664733 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:05:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:23.287117 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:23 localhost.localdomain microshift[132400]: kubelet E0213 05:05:23.501236 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:05:23 localhost.localdomain microshift[132400]: kubelet E0213 05:05:23.501277 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 05:05:24 localhost.localdomain microshift[132400]: kubelet I0213 05:05:24.663440 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:05:24 localhost.localdomain microshift[132400]: kubelet E0213 05:05:24.663708 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:05:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:28.286779 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:28 localhost.localdomain 
microshift[132400]: kubelet I0213 05:05:28.665789 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:05:28 localhost.localdomain microshift[132400]: kubelet E0213 05:05:28.666049 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:05:31 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:05:31.202273 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:05:31 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:05:31.202300 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:05:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:33.286979 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:33 localhost.localdomain microshift[132400]: kubelet I0213 05:05:33.663783 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:05:33 localhost.localdomain microshift[132400]: kubelet E0213 05:05:33.664104 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller 
pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:05:35 localhost.localdomain microshift[132400]: kubelet I0213 05:05:35.666760 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:05:35 localhost.localdomain microshift[132400]: kubelet E0213 05:05:35.667430 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:05:37 localhost.localdomain microshift[132400]: kubelet I0213 05:05:37.664332 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:05:37 localhost.localdomain microshift[132400]: kubelet E0213 05:05:37.665006 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:05:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:38.286331 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:41 localhost.localdomain microshift[132400]: kubelet I0213 05:05:41.664009 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:05:41 localhost.localdomain microshift[132400]: kubelet E0213 05:05:41.664185 132400 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:05:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:43.287390 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:43 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:05:43.935873 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:05:43 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:05:43.936050 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:05:44 localhost.localdomain microshift[132400]: kubelet I0213 05:05:44.664942 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:05:44 localhost.localdomain microshift[132400]: kubelet E0213 05:05:44.665526 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:05:48 localhost.localdomain microshift[132400]: 
sysconfwatch-controller I0213 05:05:48.286808 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:49 localhost.localdomain microshift[132400]: kubelet I0213 05:05:49.663858 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:05:49 localhost.localdomain microshift[132400]: kubelet E0213 05:05:49.664440 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:05:50 localhost.localdomain microshift[132400]: kubelet I0213 05:05:50.663282 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:05:50 localhost.localdomain microshift[132400]: kubelet E0213 05:05:50.663521 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:05:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:53.287971 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:05:53 localhost.localdomain microshift[132400]: kubelet I0213 05:05:53.663946 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:05:54 localhost.localdomain microshift[132400]: kubelet I0213 05:05:54.773366 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 
Type:ContainerStarted Data:b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449} Feb 13 05:05:57 localhost.localdomain microshift[132400]: kubelet I0213 05:05:57.664249 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:05:57 localhost.localdomain microshift[132400]: kubelet E0213 05:05:57.665201 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:05:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:05:58.287381 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:01 localhost.localdomain microshift[132400]: kubelet I0213 05:06:01.664429 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:06:01 localhost.localdomain microshift[132400]: kubelet E0213 05:06:01.664968 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:06:02 localhost.localdomain microshift[132400]: kubelet I0213 05:06:02.664474 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:06:02 localhost.localdomain microshift[132400]: kubelet E0213 05:06:02.664845 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:06:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:03.286540 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:08.286259 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:08 localhost.localdomain microshift[132400]: kubelet I0213 05:06:08.664396 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:06:08 localhost.localdomain microshift[132400]: kubelet E0213 05:06:08.665104 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:06:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:13.286845 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:13 localhost.localdomain microshift[132400]: kubelet I0213 05:06:13.663601 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:06:13 localhost.localdomain microshift[132400]: kubelet E0213 05:06:13.664189 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:06:15 localhost.localdomain microshift[132400]: kubelet I0213 05:06:15.667142 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:06:15 localhost.localdomain microshift[132400]: kubelet E0213 05:06:15.667673 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:06:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:18.286359 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:23.286746 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:23 localhost.localdomain microshift[132400]: kubelet I0213 05:06:23.663600 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:06:23 localhost.localdomain microshift[132400]: kubelet E0213 05:06:23.663963 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:06:24 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:06:24.101308 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server 
could not find the requested resource (get groups.user.openshift.io) Feb 13 05:06:24 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:06:24.101336 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:06:24 localhost.localdomain microshift[132400]: kubelet I0213 05:06:24.663980 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:06:25 localhost.localdomain microshift[132400]: kubelet I0213 05:06:25.828762 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60} Feb 13 05:06:25 localhost.localdomain microshift[132400]: kubelet I0213 05:06:25.829389 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 05:06:27 localhost.localdomain microshift[132400]: kubelet I0213 05:06:27.663889 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:06:27 localhost.localdomain microshift[132400]: kubelet E0213 05:06:27.664553 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:06:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:28.287249 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:28 localhost.localdomain 
microshift[132400]: kubelet I0213 05:06:28.833895 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" exitCode=255 Feb 13 05:06:28 localhost.localdomain microshift[132400]: kubelet I0213 05:06:28.833930 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449} Feb 13 05:06:28 localhost.localdomain microshift[132400]: kubelet I0213 05:06:28.833960 132400 scope.go:115] "RemoveContainer" containerID="d64d65f588c2f5f4f06d249b4431be262374d3d654c48634fb6134064fc33719" Feb 13 05:06:28 localhost.localdomain microshift[132400]: kubelet I0213 05:06:28.834286 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:06:28 localhost.localdomain microshift[132400]: kubelet E0213 05:06:28.834510 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:06:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:33.286919 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:38.286228 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:06:38 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:06:38.378275 132400 reflector.go:424] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:06:38 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:06:38.378306 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:06:38 localhost.localdomain microshift[132400]: kubelet I0213 05:06:38.664961 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:06:38 localhost.localdomain microshift[132400]: kubelet I0213 05:06:38.665410 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:06:38 localhost.localdomain microshift[132400]: kubelet E0213 05:06:38.665744 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:06:38 localhost.localdomain microshift[132400]: kubelet E0213 05:06:38.665850 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:06:39 localhost.localdomain 
microshift[132400]: kubelet I0213 05:06:39.347584 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:06:39 localhost.localdomain microshift[132400]: kubelet I0213 05:06:39.347650 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:06:41 localhost.localdomain microshift[132400]: kubelet I0213 05:06:41.664447 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:06:41 localhost.localdomain microshift[132400]: kubelet E0213 05:06:41.664628 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:06:42 localhost.localdomain microshift[132400]: kubelet I0213 05:06:42.347918 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:06:42 localhost.localdomain microshift[132400]: kubelet I0213 05:06:42.347974 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:43.286642 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:06:45 localhost.localdomain microshift[132400]: kubelet I0213 05:06:45.349011 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:06:45 localhost.localdomain microshift[132400]: kubelet I0213 05:06:45.349397 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:48.286695 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:06:48 localhost.localdomain microshift[132400]: kubelet I0213 05:06:48.349533 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:06:48 localhost.localdomain microshift[132400]: kubelet I0213 05:06:48.349789 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:50 localhost.localdomain microshift[132400]: kubelet I0213 05:06:50.664114 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:06:50 localhost.localdomain microshift[132400]: kubelet E0213 05:06:50.664397 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:06:51 localhost.localdomain microshift[132400]: kubelet I0213 05:06:51.351362 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:06:51 localhost.localdomain microshift[132400]: kubelet I0213 05:06:51.351599 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:53.286998 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:06:53 localhost.localdomain microshift[132400]: kubelet I0213 05:06:53.663953 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:06:53 localhost.localdomain microshift[132400]: kubelet E0213 05:06:53.664337 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:06:54 localhost.localdomain microshift[132400]: kubelet I0213 05:06:54.352586 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:06:54 localhost.localdomain microshift[132400]: kubelet I0213 05:06:54.352672 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:56 localhost.localdomain microshift[132400]: kubelet I0213 05:06:56.666852 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:06:56 localhost.localdomain microshift[132400]: kubelet E0213 05:06:56.667059 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:06:57 localhost.localdomain microshift[132400]: kubelet I0213 05:06:57.353335 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:06:57 localhost.localdomain microshift[132400]: kubelet I0213 05:06:57.353383 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:06:58 localhost.localdomain microshift[132400]: kubelet I0213 05:06:58.062153 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:06:58 localhost.localdomain microshift[132400]: kubelet E0213 05:06:58.062278 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:09:00.062264347 -0500 EST m=+3827.242610628 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:06:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:06:58.286252 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:00 localhost.localdomain microshift[132400]: kubelet I0213 05:07:00.353946 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:00 localhost.localdomain microshift[132400]: kubelet I0213 05:07:00.353999 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:03.286917 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:03 localhost.localdomain microshift[132400]: kubelet I0213 05:07:03.354448 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:03 localhost.localdomain microshift[132400]: kubelet I0213 05:07:03.354738 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:05 localhost.localdomain microshift[132400]: kubelet I0213 05:07:05.663718 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:07:05 localhost.localdomain microshift[132400]: kubelet E0213 05:07:05.664006 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:07:06 localhost.localdomain microshift[132400]: kubelet I0213 05:07:06.354889 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:06 localhost.localdomain microshift[132400]: kubelet I0213 05:07:06.354936 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:08.286371 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:08 localhost.localdomain microshift[132400]: kubelet I0213 05:07:08.663940 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:07:08 localhost.localdomain microshift[132400]: kubelet E0213 05:07:08.664447 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:07:09 localhost.localdomain microshift[132400]: kubelet I0213 05:07:09.355398 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:09 localhost.localdomain microshift[132400]: kubelet I0213 05:07:09.355471 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:10 localhost.localdomain microshift[132400]: kubelet I0213 05:07:10.664175 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:07:10 localhost.localdomain microshift[132400]: kubelet E0213 05:07:10.664799 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:07:12 localhost.localdomain microshift[132400]: kubelet I0213 05:07:12.356166 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:12 localhost.localdomain microshift[132400]: kubelet I0213 05:07:12.356594 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:13.286446 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:15 localhost.localdomain microshift[132400]: kubelet I0213 05:07:15.357505 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:15 localhost.localdomain microshift[132400]: kubelet I0213 05:07:15.357890 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:18.286893 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:18 localhost.localdomain microshift[132400]: kubelet I0213 05:07:18.358733 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:18 localhost.localdomain microshift[132400]: kubelet I0213 05:07:18.358799 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:19 localhost.localdomain microshift[132400]: kubelet I0213 05:07:19.663432 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:07:19 localhost.localdomain microshift[132400]: kubelet E0213 05:07:19.664085 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:07:20 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:07:20.123971 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:07:20 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:07:20.124175 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:07:20 localhost.localdomain microshift[132400]: kubelet I0213 05:07:20.665051 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:07:20 localhost.localdomain microshift[132400]: kubelet E0213 05:07:20.665652 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:07:21 localhost.localdomain microshift[132400]: kubelet I0213 05:07:21.359985 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:21 localhost.localdomain microshift[132400]: kubelet I0213 05:07:21.360153 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:23.287267 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:23 localhost.localdomain microshift[132400]: kubelet I0213 05:07:23.663460 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:07:23 localhost.localdomain microshift[132400]: kubelet E0213 05:07:23.663841 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:07:24 localhost.localdomain microshift[132400]: kubelet I0213 05:07:24.361039 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:24 localhost.localdomain microshift[132400]: kubelet I0213 05:07:24.361570 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:26 localhost.localdomain microshift[132400]: kubelet E0213 05:07:26.702134 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:07:26 localhost.localdomain microshift[132400]: kubelet E0213 05:07:26.702185 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:07:27 localhost.localdomain microshift[132400]: kubelet I0213 05:07:27.362723 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:27 localhost.localdomain microshift[132400]: kubelet I0213 05:07:27.362935 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:28.286910 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:30 localhost.localdomain microshift[132400]: kubelet I0213 05:07:30.363972 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:30 localhost.localdomain microshift[132400]: kubelet I0213 05:07:30.364294 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:32 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:07:32.507923 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:07:32 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:07:32.508435 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:07:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:33.286381 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:33 localhost.localdomain microshift[132400]: kubelet I0213 05:07:33.365363 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:33 localhost.localdomain microshift[132400]: kubelet I0213 05:07:33.365422 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:33 localhost.localdomain microshift[132400]: kubelet I0213 05:07:33.663917 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:07:33 localhost.localdomain microshift[132400]: kubelet E0213 05:07:33.664747 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:07:34 localhost.localdomain microshift[132400]: kubelet I0213 05:07:34.632206 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:34 localhost.localdomain microshift[132400]: kubelet I0213 05:07:34.632515 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:35 localhost.localdomain microshift[132400]: kubelet I0213 05:07:35.668682 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:07:35 localhost.localdomain microshift[132400]: kubelet E0213 05:07:35.668971 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:07:36 localhost.localdomain microshift[132400]: kubelet I0213 05:07:36.366307 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:36 localhost.localdomain microshift[132400]: kubelet I0213 05:07:36.366368 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:38.286491 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:38 localhost.localdomain microshift[132400]: kubelet I0213 05:07:38.664365 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:07:38 localhost.localdomain microshift[132400]: kubelet E0213 05:07:38.664757 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:07:39 localhost.localdomain microshift[132400]: kubelet I0213 05:07:39.367330 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:39 localhost.localdomain microshift[132400]: kubelet I0213 05:07:39.367922 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:42 localhost.localdomain microshift[132400]: kubelet I0213 05:07:42.369048 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:42 localhost.localdomain microshift[132400]: kubelet I0213 05:07:42.369118 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:43.286313 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:44 localhost.localdomain microshift[132400]: kubelet I0213 05:07:44.632285 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:44 localhost.localdomain microshift[132400]: kubelet I0213 05:07:44.632649 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:45 localhost.localdomain microshift[132400]: kubelet I0213 05:07:45.370008 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:45 localhost.localdomain microshift[132400]: kubelet I0213 05:07:45.370074 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:45 localhost.localdomain microshift[132400]: kubelet I0213 05:07:45.671508 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:07:45 localhost.localdomain microshift[132400]: kubelet E0213 05:07:45.671882 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:07:46 localhost.localdomain microshift[132400]: kubelet I0213 05:07:46.666031 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:07:46 localhost.localdomain microshift[132400]: kubelet E0213 05:07:46.666338 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:07:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:48.286811 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:48 localhost.localdomain microshift[132400]: kubelet I0213 05:07:48.370748 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:48 localhost.localdomain microshift[132400]: kubelet I0213 05:07:48.370800 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:51 localhost.localdomain microshift[132400]: kubelet I0213 05:07:51.370941 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:51 localhost.localdomain microshift[132400]: kubelet I0213 05:07:51.370989 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:53.287860 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:53 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:07:53.511419 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:07:53 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:07:53.511572 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:07:53 localhost.localdomain microshift[132400]: kubelet I0213 05:07:53.664165 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:07:53 localhost.localdomain microshift[132400]: kubelet E0213 05:07:53.664607 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:07:54 localhost.localdomain microshift[132400]: kubelet I0213 05:07:54.371331 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:54 localhost.localdomain microshift[132400]: kubelet I0213 05:07:54.371395 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:54 localhost.localdomain microshift[132400]: kubelet I0213 05:07:54.631635 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:54 localhost.localdomain microshift[132400]: kubelet I0213 05:07:54.631775 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:57 localhost.localdomain microshift[132400]: kubelet I0213 05:07:57.371908 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:07:57 localhost.localdomain microshift[132400]: kubelet I0213 05:07:57.372250 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:07:57 localhost.localdomain microshift[132400]: kubelet I0213 05:07:57.663711 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31"
Feb 13 05:07:57 localhost.localdomain microshift[132400]: kubelet E0213 05:07:57.664043 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:07:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:07:58.286488 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:07:58 localhost.localdomain microshift[132400]: kubelet I0213 05:07:58.664128 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:07:58 localhost.localdomain microshift[132400]: kubelet E0213 05:07:58.664875 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:08:00 localhost.localdomain microshift[132400]: kubelet I0213 05:08:00.372869 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:08:00 localhost.localdomain microshift[132400]: kubelet I0213 05:08:00.373222 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:08:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:03.286958 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:08:03 localhost.localdomain microshift[132400]: kubelet I0213 05:08:03.373456 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:08:03 localhost.localdomain microshift[132400]: kubelet I0213 05:08:03.373515 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:08:04 localhost.localdomain microshift[132400]: kubelet I0213 05:08:04.631865 132400 patch_prober.go:28] interesting
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:04 localhost.localdomain microshift[132400]: kubelet I0213 05:08:04.632264 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:05 localhost.localdomain microshift[132400]: kubelet I0213 05:08:05.669946 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:08:05 localhost.localdomain microshift[132400]: kubelet E0213 05:08:05.670130 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:08:06 localhost.localdomain microshift[132400]: kubelet I0213 05:08:06.374727 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:06 localhost.localdomain microshift[132400]: kubelet I0213 05:08:06.374794 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:08.287195 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:08 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:08:08.345859 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:08:08 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:08:08.346008 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:08:09 localhost.localdomain microshift[132400]: kubelet I0213 05:08:09.375935 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:09 localhost.localdomain microshift[132400]: kubelet I0213 05:08:09.376309 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:11 localhost.localdomain microshift[132400]: kubelet I0213 05:08:11.664029 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:08:11 localhost.localdomain microshift[132400]: kubelet E0213 
05:08:11.664417 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:08:12 localhost.localdomain microshift[132400]: kubelet I0213 05:08:12.377005 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:12 localhost.localdomain microshift[132400]: kubelet I0213 05:08:12.377212 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:12 localhost.localdomain microshift[132400]: kubelet I0213 05:08:12.665099 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:08:12 localhost.localdomain microshift[132400]: kubelet E0213 05:08:12.665512 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:08:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:13.286272 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:14 
localhost.localdomain microshift[132400]: kubelet I0213 05:08:14.632333 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:14 localhost.localdomain microshift[132400]: kubelet I0213 05:08:14.632385 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:14 localhost.localdomain microshift[132400]: kubelet I0213 05:08:14.632413 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 05:08:14 localhost.localdomain microshift[132400]: kubelet I0213 05:08:14.632814 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 05:08:14 localhost.localdomain microshift[132400]: kubelet I0213 05:08:14.632927 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60" gracePeriod=30 Feb 13 05:08:15 localhost.localdomain microshift[132400]: kubelet I0213 05:08:15.378215 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 13 05:08:15 localhost.localdomain microshift[132400]: kubelet I0213 05:08:15.378273 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:18.287233 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:18 localhost.localdomain microshift[132400]: kubelet I0213 05:08:18.378359 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:18 localhost.localdomain microshift[132400]: kubelet I0213 05:08:18.378554 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:19 localhost.localdomain microshift[132400]: kubelet I0213 05:08:19.664203 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:08:19 localhost.localdomain microshift[132400]: kubelet E0213 05:08:19.664392 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" 
podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:08:21 localhost.localdomain microshift[132400]: kubelet I0213 05:08:21.378762 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:21 localhost.localdomain microshift[132400]: kubelet I0213 05:08:21.378814 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:23.286781 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:23 localhost.localdomain microshift[132400]: kubelet I0213 05:08:23.664141 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:08:23 localhost.localdomain microshift[132400]: kubelet E0213 05:08:23.664608 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:08:24 localhost.localdomain microshift[132400]: kubelet I0213 05:08:24.379315 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:24 localhost.localdomain 
microshift[132400]: kubelet I0213 05:08:24.379373 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:24 localhost.localdomain microshift[132400]: kubelet I0213 05:08:24.664361 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:08:24 localhost.localdomain microshift[132400]: kubelet E0213 05:08:24.664689 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:08:27 localhost.localdomain microshift[132400]: kubelet I0213 05:08:27.379505 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:27 localhost.localdomain microshift[132400]: kubelet I0213 05:08:27.379900 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:28.286467 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:30 localhost.localdomain 
microshift[132400]: kubelet I0213 05:08:30.380154 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:30 localhost.localdomain microshift[132400]: kubelet I0213 05:08:30.380242 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:32 localhost.localdomain microshift[132400]: kubelet I0213 05:08:32.664027 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:08:32 localhost.localdomain microshift[132400]: kubelet E0213 05:08:32.664734 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:08:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:33.286866 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:33 localhost.localdomain microshift[132400]: kubelet I0213 05:08:33.381189 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:33 localhost.localdomain microshift[132400]: kubelet I0213 
05:08:33.381274 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:35 localhost.localdomain microshift[132400]: kubelet I0213 05:08:35.037720 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60" exitCode=0 Feb 13 05:08:35 localhost.localdomain microshift[132400]: kubelet I0213 05:08:35.038051 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60} Feb 13 05:08:35 localhost.localdomain microshift[132400]: kubelet I0213 05:08:35.038074 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf} Feb 13 05:08:35 localhost.localdomain microshift[132400]: kubelet I0213 05:08:35.038088 132400 scope.go:115] "RemoveContainer" containerID="1447e7f39516ccd0ed8cb6fe45ac98a1e318b1c16b362c8d5c0d1992060cdde6" Feb 13 05:08:36 localhost.localdomain microshift[132400]: kubelet I0213 05:08:36.040540 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 05:08:36 localhost.localdomain microshift[132400]: kubelet I0213 05:08:36.381571 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:36 
localhost.localdomain microshift[132400]: kubelet I0213 05:08:36.381625 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:36 localhost.localdomain microshift[132400]: kubelet I0213 05:08:36.381680 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p" Feb 13 05:08:36 localhost.localdomain microshift[132400]: kubelet I0213 05:08:36.664332 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:08:37 localhost.localdomain microshift[132400]: kubelet I0213 05:08:37.044453 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add} Feb 13 05:08:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:38.287027 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:39 localhost.localdomain microshift[132400]: kubelet I0213 05:08:39.663930 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:08:39 localhost.localdomain microshift[132400]: kubelet E0213 05:08:39.664324 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:08:40 
localhost.localdomain microshift[132400]: kubelet I0213 05:08:40.050887 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" exitCode=1 Feb 13 05:08:40 localhost.localdomain microshift[132400]: kubelet I0213 05:08:40.050933 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add} Feb 13 05:08:40 localhost.localdomain microshift[132400]: kubelet I0213 05:08:40.050963 132400 scope.go:115] "RemoveContainer" containerID="050131b6ed48b3d76f4160fb1cc06f36600e87ad00035b3f22cc0d12abc20b31" Feb 13 05:08:40 localhost.localdomain microshift[132400]: kubelet I0213 05:08:40.051373 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:08:40 localhost.localdomain microshift[132400]: kubelet E0213 05:08:40.051779 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:08:42 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:08:42.390286 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:08:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:08:42.390315 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list 
*v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:08:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:43.286605 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:46 localhost.localdomain microshift[132400]: kubelet I0213 05:08:46.664278 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:08:46 localhost.localdomain microshift[132400]: kubelet E0213 05:08:46.664855 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:08:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:48.287061 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:08:48 localhost.localdomain microshift[132400]: kubelet I0213 05:08:48.347412 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:48 localhost.localdomain microshift[132400]: kubelet I0213 05:08:48.347479 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet I0213 05:08:51.348586 132400 patch_prober.go:28] interesting 
pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet I0213 05:08:51.348642 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet I0213 05:08:51.663709 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet E0213 05:08:51.664181 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet I0213 05:08:51.664483 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94" Feb 13 05:08:51 localhost.localdomain microshift[132400]: kubelet E0213 05:08:51.664815 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:08:53 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:53.286514 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:08:54 localhost.localdomain microshift[132400]: kubelet I0213 05:08:54.349305 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:08:54 localhost.localdomain microshift[132400]: kubelet I0213 05:08:54.349360 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:08:57 localhost.localdomain microshift[132400]: kubelet I0213 05:08:57.350214 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:08:57 localhost.localdomain microshift[132400]: kubelet I0213 05:08:57.350272 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:08:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:08:58.287259 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:08:59 localhost.localdomain microshift[132400]: kubelet I0213 05:08:59.663432 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:08:59 localhost.localdomain microshift[132400]: kubelet E0213 05:08:59.663609 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:09:00 localhost.localdomain microshift[132400]: kubelet I0213 05:09:00.091375 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:09:00 localhost.localdomain microshift[132400]: kubelet E0213 05:09:00.091517 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:11:02.091501809 -0500 EST m=+3949.271848084 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:09:00 localhost.localdomain microshift[132400]: kubelet I0213 05:09:00.351129 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:00 localhost.localdomain microshift[132400]: kubelet I0213 05:09:00.351411 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:01 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:09:01.800406 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:09:01 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:09:01.800428 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:09:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:03.286951 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:03 localhost.localdomain microshift[132400]: kubelet I0213 05:09:03.352332 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:03 localhost.localdomain microshift[132400]: kubelet I0213 05:09:03.352396 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:03 localhost.localdomain microshift[132400]: kubelet I0213 05:09:03.663899 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:09:04 localhost.localdomain microshift[132400]: kubelet I0213 05:09:04.090958 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c}
Feb 13 05:09:04 localhost.localdomain microshift[132400]: kubelet I0213 05:09:04.092842 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 05:09:05 localhost.localdomain microshift[132400]: kubelet I0213 05:09:05.092116 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:05 localhost.localdomain microshift[132400]: kubelet I0213 05:09:05.092154 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet I0213 05:09:06.093512 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet I0213 05:09:06.093565 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet I0213 05:09:06.353800 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet I0213 05:09:06.353855 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet I0213 05:09:06.663598 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:06 localhost.localdomain microshift[132400]: kubelet E0213 05:09:06.663915 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:09:07 localhost.localdomain microshift[132400]: kubelet I0213 05:09:07.097275 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" exitCode=1
Feb 13 05:09:07 localhost.localdomain microshift[132400]: kubelet I0213 05:09:07.097647 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c}
Feb 13 05:09:07 localhost.localdomain microshift[132400]: kubelet I0213 05:09:07.097758 132400 scope.go:115] "RemoveContainer" containerID="671d1e4fb34ca1b8b4107829cf57da5a5a4fc24d04a7f001104193a04b998d94"
Feb 13 05:09:07 localhost.localdomain microshift[132400]: kubelet I0213 05:09:07.098142 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c"
Feb 13 05:09:07 localhost.localdomain microshift[132400]: kubelet E0213 05:09:07.098791 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:09:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:08.287530 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:09 localhost.localdomain microshift[132400]: kubelet I0213 05:09:09.354765 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:09 localhost.localdomain microshift[132400]: kubelet I0213 05:09:09.355174 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:12 localhost.localdomain microshift[132400]: kubelet I0213 05:09:12.355337 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:12 localhost.localdomain microshift[132400]: kubelet I0213 05:09:12.355790 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:13.286447 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:14 localhost.localdomain microshift[132400]: kubelet I0213 05:09:14.664337 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:09:14 localhost.localdomain microshift[132400]: kubelet E0213 05:09:14.665235 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:09:15 localhost.localdomain microshift[132400]: kubelet I0213 05:09:15.356253 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:15 localhost.localdomain microshift[132400]: kubelet I0213 05:09:15.356309 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:17 localhost.localdomain microshift[132400]: kubelet I0213 05:09:17.663634 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:17 localhost.localdomain microshift[132400]: kubelet E0213 05:09:17.664605 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:09:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:18.287274 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:18 localhost.localdomain microshift[132400]: kubelet I0213 05:09:18.356752 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:18 localhost.localdomain microshift[132400]: kubelet I0213 05:09:18.356804 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:20 localhost.localdomain microshift[132400]: kubelet I0213 05:09:20.902223 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5"
Feb 13 05:09:20 localhost.localdomain microshift[132400]: kubelet I0213 05:09:20.903166 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:20 localhost.localdomain microshift[132400]: kubelet E0213 05:09:20.903575 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:09:21 localhost.localdomain microshift[132400]: kubelet I0213 05:09:21.357725 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:21 localhost.localdomain microshift[132400]: kubelet I0213 05:09:21.357781 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:21 localhost.localdomain microshift[132400]: kubelet I0213 05:09:21.664650 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c"
Feb 13 05:09:21 localhost.localdomain microshift[132400]: kubelet E0213 05:09:21.665112 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:09:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:23.286685 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:24 localhost.localdomain microshift[132400]: kubelet I0213 05:09:24.358589 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:24 localhost.localdomain microshift[132400]: kubelet I0213 05:09:24.358672 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:26 localhost.localdomain microshift[132400]: kubelet I0213 05:09:26.192335 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 05:09:26 localhost.localdomain microshift[132400]: kubelet I0213 05:09:26.192630 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c"
Feb 13 05:09:26 localhost.localdomain microshift[132400]: kubelet E0213 05:09:26.192986 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:09:27 localhost.localdomain microshift[132400]: kubelet I0213 05:09:27.359095 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:27 localhost.localdomain microshift[132400]: kubelet I0213 05:09:27.359452 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:27 localhost.localdomain microshift[132400]: kubelet I0213 05:09:27.663558 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:09:27 localhost.localdomain microshift[132400]: kubelet E0213 05:09:27.663962 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:09:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:28.286674 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:29 localhost.localdomain microshift[132400]: kubelet E0213 05:09:29.928463 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:09:29 localhost.localdomain microshift[132400]: kubelet E0213 05:09:29.928943 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:09:30 localhost.localdomain microshift[132400]: kubelet I0213 05:09:30.360206 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:30 localhost.localdomain microshift[132400]: kubelet I0213 05:09:30.360526 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:30 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:09:30.594631 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:09:30 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:09:30.594677 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:09:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:33.286882 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:33 localhost.localdomain microshift[132400]: kubelet I0213 05:09:33.361780 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:33 localhost.localdomain microshift[132400]: kubelet I0213 05:09:33.361861 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:34 localhost.localdomain microshift[132400]: kubelet I0213 05:09:34.663508 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:34 localhost.localdomain microshift[132400]: kubelet E0213 05:09:34.664399 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:09:36 localhost.localdomain microshift[132400]: kubelet I0213 05:09:36.362985 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:36 localhost.localdomain microshift[132400]: kubelet I0213 05:09:36.363027 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:38.286332 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:39 localhost.localdomain microshift[132400]: kubelet I0213 05:09:39.363891 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:39 localhost.localdomain microshift[132400]: kubelet I0213 05:09:39.363938 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:41 localhost.localdomain microshift[132400]: kubelet I0213 05:09:41.664147 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:09:41 localhost.localdomain microshift[132400]: kubelet I0213 05:09:41.664818 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c"
Feb 13 05:09:41 localhost.localdomain microshift[132400]: kubelet E0213 05:09:41.664858 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:09:41 localhost.localdomain microshift[132400]: kubelet E0213 05:09:41.665180 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:09:42 localhost.localdomain microshift[132400]: kubelet I0213 05:09:42.364396 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:42 localhost.localdomain microshift[132400]: kubelet I0213 05:09:42.364452 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:43.286840 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:44 localhost.localdomain microshift[132400]: kubelet I0213 05:09:44.632723 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:44 localhost.localdomain microshift[132400]: kubelet I0213 05:09:44.633075 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:45 localhost.localdomain microshift[132400]: kubelet I0213 05:09:45.364886 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:45 localhost.localdomain microshift[132400]: kubelet I0213 05:09:45.365070 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:48.286715 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:48 localhost.localdomain microshift[132400]: kubelet I0213 05:09:48.365429 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:48 localhost.localdomain microshift[132400]: kubelet I0213 05:09:48.365620 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:48 localhost.localdomain microshift[132400]: kubelet I0213 05:09:48.665565 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:48 localhost.localdomain microshift[132400]: kubelet E0213 05:09:48.666048 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:09:51 localhost.localdomain microshift[132400]: kubelet I0213 05:09:51.366087 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:51 localhost.localdomain microshift[132400]: kubelet I0213 05:09:51.366134 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:53.286711 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet I0213 05:09:54.366483 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet I0213 05:09:54.366911 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet I0213 05:09:54.631444 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet I0213 05:09:54.631736 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet I0213 05:09:54.663851 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c"
Feb 13 05:09:54 localhost.localdomain microshift[132400]: kubelet E0213 05:09:54.664335 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:09:56 localhost.localdomain microshift[132400]: kubelet I0213 05:09:56.666952 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449"
Feb 13 05:09:56 localhost.localdomain microshift[132400]: kubelet E0213 05:09:56.667368 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:09:57 localhost.localdomain microshift[132400]: kubelet I0213 05:09:57.367961 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:09:57 localhost.localdomain microshift[132400]: kubelet I0213 05:09:57.368021 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:09:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:09:58.286488 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:09:59 localhost.localdomain microshift[132400]: kubelet I0213 05:09:59.664294 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add"
Feb 13 05:09:59 localhost.localdomain microshift[132400]: kubelet E0213 05:09:59.664580 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:10:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:10:00.224980 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:10:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:10:00.225009 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:10:00 localhost.localdomain microshift[132400]: kubelet I0213 05:10:00.368514 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:10:00 localhost.localdomain microshift[132400]: kubelet I0213 05:10:00.368802 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:10:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:03.286895 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:10:03 localhost.localdomain microshift[132400]: kubelet I0213 05:10:03.369996 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:10:03 localhost.localdomain microshift[132400]: kubelet I0213 05:10:03.370274 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:10:04 localhost.localdomain microshift[132400]: kubelet I0213 05:10:04.630975 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:10:04 localhost.localdomain microshift[132400]: kubelet I0213 05:10:04.631331 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\":
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:06 localhost.localdomain microshift[132400]: kubelet I0213 05:10:06.370449 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:06 localhost.localdomain microshift[132400]: kubelet I0213 05:10:06.370493 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:08.286511 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:08 localhost.localdomain microshift[132400]: kubelet I0213 05:10:08.663928 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:10:08 localhost.localdomain microshift[132400]: kubelet E0213 05:10:08.664398 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:10:09 localhost.localdomain microshift[132400]: kubelet I0213 05:10:09.371223 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 13 05:10:09 localhost.localdomain microshift[132400]: kubelet I0213 05:10:09.371793 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:10 localhost.localdomain microshift[132400]: kubelet I0213 05:10:10.664220 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:10:10 localhost.localdomain microshift[132400]: kubelet E0213 05:10:10.665200 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:10:12 localhost.localdomain microshift[132400]: kubelet I0213 05:10:12.372745 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:12 localhost.localdomain microshift[132400]: kubelet I0213 05:10:12.373173 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:12 localhost.localdomain microshift[132400]: kubelet I0213 05:10:12.665319 132400 scope.go:115] "RemoveContainer" 
containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:10:12 localhost.localdomain microshift[132400]: kubelet E0213 05:10:12.665776 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:10:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:13.286879 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:14 localhost.localdomain microshift[132400]: kubelet I0213 05:10:14.631711 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:14 localhost.localdomain microshift[132400]: kubelet I0213 05:10:14.631790 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:10:14.772986 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:10:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:10:14.773138 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the 
server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:10:15 localhost.localdomain microshift[132400]: kubelet I0213 05:10:15.374493 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:15 localhost.localdomain microshift[132400]: kubelet I0213 05:10:15.374547 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:18.287031 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:18 localhost.localdomain microshift[132400]: kubelet I0213 05:10:18.375110 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:18 localhost.localdomain microshift[132400]: kubelet I0213 05:10:18.375159 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:20 localhost.localdomain microshift[132400]: kubelet I0213 05:10:20.663482 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:10:20 localhost.localdomain microshift[132400]: kubelet 
E0213 05:10:20.663819 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:10:21 localhost.localdomain microshift[132400]: kubelet I0213 05:10:21.376035 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:21 localhost.localdomain microshift[132400]: kubelet I0213 05:10:21.376080 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:23.286882 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.376502 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.376554 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.632484 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.632757 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.632850 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p" Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.633243 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted" Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.633390 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" gracePeriod=30 Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet I0213 05:10:24.663486 132400 scope.go:115] "RemoveContainer" 
containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:10:24 localhost.localdomain microshift[132400]: kubelet E0213 05:10:24.663825 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:10:25 localhost.localdomain microshift[132400]: kubelet I0213 05:10:25.663653 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:10:25 localhost.localdomain microshift[132400]: kubelet E0213 05:10:25.663958 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:10:27 localhost.localdomain microshift[132400]: kubelet I0213 05:10:27.377410 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:27 localhost.localdomain microshift[132400]: kubelet I0213 05:10:27.377514 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:28 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:28.286262 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:30 localhost.localdomain microshift[132400]: kubelet I0213 05:10:30.378211 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:30 localhost.localdomain microshift[132400]: kubelet I0213 05:10:30.378289 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:31 localhost.localdomain microshift[132400]: kubelet I0213 05:10:31.664388 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:10:31 localhost.localdomain microshift[132400]: kubelet E0213 05:10:31.665204 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:10:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:33.286461 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:33 localhost.localdomain microshift[132400]: kubelet I0213 05:10:33.378807 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get 
\"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:33 localhost.localdomain microshift[132400]: kubelet I0213 05:10:33.379089 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:36 localhost.localdomain microshift[132400]: kubelet I0213 05:10:36.380123 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:36 localhost.localdomain microshift[132400]: kubelet I0213 05:10:36.380267 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:36 localhost.localdomain microshift[132400]: kubelet I0213 05:10:36.665842 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:10:36 localhost.localdomain microshift[132400]: kubelet E0213 05:10:36.666037 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:10:37 localhost.localdomain 
microshift[132400]: kubelet I0213 05:10:37.664431 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:10:37 localhost.localdomain microshift[132400]: kubelet E0213 05:10:37.664832 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:10:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:38.287894 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:39 localhost.localdomain microshift[132400]: kubelet I0213 05:10:39.381016 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:39 localhost.localdomain microshift[132400]: kubelet I0213 05:10:39.381328 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:42 localhost.localdomain microshift[132400]: kubelet I0213 05:10:42.382221 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:42 localhost.localdomain microshift[132400]: kubelet I0213 05:10:42.382817 132400 prober.go:109] "Probe 
failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:43.286986 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:43 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:10:43.652369 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:10:43 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:10:43.652398 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:10:44 localhost.localdomain microshift[132400]: kubelet E0213 05:10:44.749986 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:10:44 localhost.localdomain microshift[132400]: kubelet I0213 05:10:44.804200 132400 scope.go:115] "RemoveContainer" containerID="eaa742c4c04e2f1af35bceb6b38e9e8f00b2e2da6188f732a1ce3eab4c621d60" Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet I0213 05:10:45.258081 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 
containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" exitCode=0 Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet I0213 05:10:45.258246 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf} Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet I0213 05:10:45.258497 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet E0213 05:10:45.258842 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet I0213 05:10:45.383714 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:10:45 localhost.localdomain microshift[132400]: kubelet I0213 05:10:45.383765 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:10:46 localhost.localdomain microshift[132400]: kubelet I0213 05:10:46.665307 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 
05:10:46 localhost.localdomain microshift[132400]: kubelet E0213 05:10:46.666064 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:10:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:48.286911 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:48 localhost.localdomain microshift[132400]: kubelet I0213 05:10:48.664142 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:10:48 localhost.localdomain microshift[132400]: kubelet I0213 05:10:48.664820 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:10:48 localhost.localdomain microshift[132400]: kubelet E0213 05:10:48.665036 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:10:48 localhost.localdomain microshift[132400]: kubelet E0213 05:10:48.665267 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" 
podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:10:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:53.286213 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:10:55 localhost.localdomain microshift[132400]: kubelet I0213 05:10:55.667047 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:10:55 localhost.localdomain microshift[132400]: kubelet E0213 05:10:55.667802 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:10:57 localhost.localdomain microshift[132400]: kubelet I0213 05:10:57.664188 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:10:57 localhost.localdomain microshift[132400]: kubelet E0213 05:10:57.664945 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:10:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:10:58.286547 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:00 localhost.localdomain microshift[132400]: kubelet I0213 05:11:00.664071 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:11:00 localhost.localdomain microshift[132400]: kubelet E0213 05:11:00.664965 132400 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:11:02 localhost.localdomain microshift[132400]: kubelet I0213 05:11:02.125265 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:11:02 localhost.localdomain microshift[132400]: kubelet E0213 05:11:02.125382 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:13:04.125372114 -0500 EST m=+4071.305718392 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:11:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:03.286820 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:03 localhost.localdomain microshift[132400]: kubelet I0213 05:11:03.663999 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:11:03 localhost.localdomain microshift[132400]: kubelet E0213 05:11:03.664676 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:11:06 localhost.localdomain microshift[132400]: kubelet I0213 05:11:06.666094 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:11:06 localhost.localdomain microshift[132400]: kubelet E0213 05:11:06.666380 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:11:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:08.286405 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:10 localhost.localdomain microshift[132400]: kubelet I0213 05:11:10.665038 
132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:11:10 localhost.localdomain microshift[132400]: kubelet E0213 05:11:10.665716 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:11:12 localhost.localdomain microshift[132400]: kubelet I0213 05:11:12.663597 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:11:12 localhost.localdomain microshift[132400]: kubelet E0213 05:11:12.664344 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:11:12 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:11:12.858552 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:11:12 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:11:12.858578 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:11:13 localhost.localdomain 
microshift[132400]: sysconfwatch-controller I0213 05:11:13.287229 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:16 localhost.localdomain microshift[132400]: kubelet I0213 05:11:16.666284 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:11:16 localhost.localdomain microshift[132400]: kubelet E0213 05:11:16.666614 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:11:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:18.286214 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:20 localhost.localdomain microshift[132400]: kubelet I0213 05:11:20.663631 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:11:20 localhost.localdomain microshift[132400]: kubelet E0213 05:11:20.664383 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:11:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:23.287006 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:24 localhost.localdomain microshift[132400]: kubelet I0213 05:11:24.664835 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:11:24 localhost.localdomain microshift[132400]: kubelet 
E0213 05:11:24.666013 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:11:27 localhost.localdomain microshift[132400]: kubelet I0213 05:11:27.664041 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:11:27 localhost.localdomain microshift[132400]: kubelet E0213 05:11:27.664224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:11:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:28.287024 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:31 localhost.localdomain microshift[132400]: kubelet I0213 05:11:31.664114 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:11:31 localhost.localdomain microshift[132400]: kubelet E0213 05:11:31.664863 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:11:32 localhost.localdomain microshift[132400]: 
kubelet I0213 05:11:32.664041 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:11:32 localhost.localdomain microshift[132400]: kubelet E0213 05:11:32.664430 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:11:33 localhost.localdomain microshift[132400]: kubelet E0213 05:11:33.136915 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:11:33 localhost.localdomain microshift[132400]: kubelet E0213 05:11:33.136956 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 05:11:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:33.286906 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:33 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:11:33.459000 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:11:33 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:11:33.459202 132400 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:11:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:38.286692 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:38 localhost.localdomain microshift[132400]: kubelet I0213 05:11:38.663517 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:11:38 localhost.localdomain microshift[132400]: kubelet E0213 05:11:38.664042 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:11:39 localhost.localdomain microshift[132400]: kubelet I0213 05:11:39.664089 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:11:40 localhost.localdomain microshift[132400]: kubelet I0213 05:11:40.344388 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6} Feb 13 05:11:42 localhost.localdomain microshift[132400]: kubelet I0213 05:11:42.664275 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:11:42 localhost.localdomain microshift[132400]: kubelet E0213 05:11:42.665025 132400 pod_workers.go:965] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:11:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:43.286748 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:46 localhost.localdomain microshift[132400]: kubelet I0213 05:11:46.664421 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:11:46 localhost.localdomain microshift[132400]: kubelet E0213 05:11:46.667315 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:11:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:48.286584 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:51 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:11:51.378020 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:11:51 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:11:51.378044 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:11:52 localhost.localdomain microshift[132400]: kubelet I0213 
05:11:52.664255 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:11:52 localhost.localdomain microshift[132400]: kubelet E0213 05:11:52.664561 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:11:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:53.286772 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:11:54 localhost.localdomain microshift[132400]: kubelet I0213 05:11:54.664737 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:11:54 localhost.localdomain microshift[132400]: kubelet E0213 05:11:54.665587 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:11:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:11:58.286588 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:00 localhost.localdomain microshift[132400]: kubelet I0213 05:12:00.664222 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:12:00 localhost.localdomain microshift[132400]: kubelet E0213 05:12:00.664497 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:12:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:03.286949 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:05 localhost.localdomain microshift[132400]: kubelet I0213 05:12:05.663405 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:12:05 localhost.localdomain microshift[132400]: kubelet E0213 05:12:05.663686 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:12:06 localhost.localdomain microshift[132400]: kubelet I0213 05:12:06.663968 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:12:06 localhost.localdomain microshift[132400]: kubelet E0213 05:12:06.667444 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:12:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:08.286747 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:11 localhost.localdomain microshift[132400]: kube-apiserver W0213 
05:12:11.202927 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:12:11 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:12:11.202962 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:12:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:13.286465 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:13 localhost.localdomain microshift[132400]: kubelet I0213 05:12:13.664517 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:12:13 localhost.localdomain microshift[132400]: kubelet E0213 05:12:13.664844 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:12:14 localhost.localdomain microshift[132400]: kubelet I0213 05:12:14.400987 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" exitCode=255 Feb 13 05:12:14 localhost.localdomain microshift[132400]: kubelet I0213 05:12:14.401016 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied 
Data:1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6} Feb 13 05:12:14 localhost.localdomain microshift[132400]: kubelet I0213 05:12:14.401037 132400 scope.go:115] "RemoveContainer" containerID="b2f7494070131abed49ea72f39a189a05fda93052640f22416a3d3945179f449" Feb 13 05:12:14 localhost.localdomain microshift[132400]: kubelet I0213 05:12:14.401236 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:12:14 localhost.localdomain microshift[132400]: kubelet E0213 05:12:14.401371 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:12:17 localhost.localdomain microshift[132400]: kubelet I0213 05:12:17.664033 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:12:17 localhost.localdomain microshift[132400]: kubelet E0213 05:12:17.664511 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:12:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:18.286526 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:19 localhost.localdomain microshift[132400]: kubelet I0213 05:12:19.663986 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 
13 05:12:19 localhost.localdomain microshift[132400]: kubelet E0213 05:12:19.664581 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:12:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:23.286643 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:25 localhost.localdomain microshift[132400]: kubelet I0213 05:12:25.663401 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:12:25 localhost.localdomain microshift[132400]: kubelet E0213 05:12:25.663882 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:12:26 localhost.localdomain microshift[132400]: kubelet I0213 05:12:26.664023 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:12:26 localhost.localdomain microshift[132400]: kubelet E0213 05:12:26.664263 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:12:28 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:28.286549 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:28 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:12:28.697747 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:12:28 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:12:28.697968 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:12:29 localhost.localdomain microshift[132400]: kubelet I0213 05:12:29.664229 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:12:29 localhost.localdomain microshift[132400]: kubelet E0213 05:12:29.664887 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:12:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:33.286951 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:33 localhost.localdomain microshift[132400]: kubelet I0213 05:12:33.663805 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:12:33 localhost.localdomain microshift[132400]: kubelet E0213 05:12:33.664499 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:12:37 localhost.localdomain microshift[132400]: kubelet I0213 05:12:37.663772 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:12:37 localhost.localdomain microshift[132400]: kubelet E0213 05:12:37.664231 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:12:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:38.286649 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:39 localhost.localdomain microshift[132400]: kubelet I0213 05:12:39.663897 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:12:39 localhost.localdomain microshift[132400]: kubelet E0213 05:12:39.664152 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:12:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:43.286213 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:43 localhost.localdomain 
microshift[132400]: kube-apiserver W0213 05:12:43.298916 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:12:43 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:12:43.299056 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:12:44 localhost.localdomain microshift[132400]: kubelet I0213 05:12:44.665247 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:12:44 localhost.localdomain microshift[132400]: kubelet E0213 05:12:44.665975 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:12:47 localhost.localdomain microshift[132400]: kubelet I0213 05:12:47.664227 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:12:47 localhost.localdomain microshift[132400]: kubelet E0213 05:12:47.664866 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:12:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:48.287183 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:50 localhost.localdomain microshift[132400]: kubelet I0213 05:12:50.664076 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:12:50 localhost.localdomain microshift[132400]: kubelet E0213 05:12:50.664648 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:12:52 localhost.localdomain microshift[132400]: kubelet I0213 05:12:52.664905 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:12:52 localhost.localdomain microshift[132400]: kubelet E0213 05:12:52.665419 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:12:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:53.287217 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:12:58.286844 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:12:58 localhost.localdomain microshift[132400]: kubelet I0213 05:12:58.664310 132400 scope.go:115] "RemoveContainer" 
containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:12:58 localhost.localdomain microshift[132400]: kubelet E0213 05:12:58.664992 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:12:59 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:12:59.618364 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:12:59 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:12:59.618936 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:13:02 localhost.localdomain microshift[132400]: kubelet I0213 05:13:02.663987 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:13:02 localhost.localdomain microshift[132400]: kubelet E0213 05:13:02.664486 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:13:02 localhost.localdomain microshift[132400]: kubelet I0213 05:13:02.664576 132400 scope.go:115] "RemoveContainer" 
containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:13:02 localhost.localdomain microshift[132400]: kubelet E0213 05:13:02.665255 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:13:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:03.286550 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:03 localhost.localdomain microshift[132400]: kubelet I0213 05:13:03.664202 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:13:03 localhost.localdomain microshift[132400]: kubelet E0213 05:13:03.664608 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:13:04 localhost.localdomain microshift[132400]: kubelet I0213 05:13:04.148824 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:13:04 localhost.localdomain microshift[132400]: kubelet E0213 05:13:04.148963 132400 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:15:06.148949381 -0500 EST m=+4193.329295660 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:13:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:08.287094 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:09 localhost.localdomain microshift[132400]: kubelet I0213 05:13:09.664371 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:13:09 localhost.localdomain microshift[132400]: kubelet E0213 05:13:09.664644 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:13:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:13.286754 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:14 localhost.localdomain microshift[132400]: kubelet I0213 05:13:14.664392 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:13:14 localhost.localdomain microshift[132400]: kubelet E0213 05:13:14.665214 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:13:15 localhost.localdomain microshift[132400]: kubelet I0213 05:13:15.666968 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:13:15 localhost.localdomain microshift[132400]: kubelet E0213 05:13:15.667122 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:13:17 localhost.localdomain microshift[132400]: kubelet I0213 05:13:17.664306 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:13:17 localhost.localdomain microshift[132400]: kubelet E0213 05:13:17.664646 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:13:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:18.286910 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:22 localhost.localdomain microshift[132400]: kubelet I0213 05:13:22.664764 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:13:22 localhost.localdomain 
microshift[132400]: kubelet E0213 05:13:22.665306 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:13:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:23.286492 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:25 localhost.localdomain microshift[132400]: kubelet I0213 05:13:25.663683 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:13:25 localhost.localdomain microshift[132400]: kubelet E0213 05:13:25.664443 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:13:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:28.286454 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:28 localhost.localdomain microshift[132400]: kubelet I0213 05:13:28.663559 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:13:28 localhost.localdomain microshift[132400]: kubelet E0213 05:13:28.663751 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" 
pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:13:29 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:13:29.370896 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:13:29 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:13:29.371206 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:13:29 localhost.localdomain microshift[132400]: kubelet I0213 05:13:29.663902 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:13:29 localhost.localdomain microshift[132400]: kubelet E0213 05:13:29.664215 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:13:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:33.286865 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:34 localhost.localdomain microshift[132400]: kubelet I0213 05:13:34.664310 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:13:34 localhost.localdomain microshift[132400]: kubelet E0213 05:13:34.665257 132400 pod_workers.go:965] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:13:36 localhost.localdomain microshift[132400]: kubelet E0213 05:13:36.333778 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:13:36 localhost.localdomain microshift[132400]: kubelet E0213 05:13:36.333800 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 05:13:37 localhost.localdomain microshift[132400]: kubelet I0213 05:13:37.663407 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:13:37 localhost.localdomain microshift[132400]: kubelet E0213 05:13:37.663702 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:13:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:38.287124 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:40 localhost.localdomain microshift[132400]: kubelet I0213 
05:13:40.664339 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:13:40 localhost.localdomain microshift[132400]: kubelet E0213 05:13:40.665289 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:13:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:43.286843 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:43 localhost.localdomain microshift[132400]: kubelet I0213 05:13:43.664008 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:13:43 localhost.localdomain microshift[132400]: kubelet E0213 05:13:43.664388 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:13:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:48.287116 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:49 localhost.localdomain microshift[132400]: kubelet I0213 05:13:49.663695 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:13:50 localhost.localdomain microshift[132400]: kubelet I0213 05:13:50.555345 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" 
pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea} Feb 13 05:13:51 localhost.localdomain microshift[132400]: kubelet I0213 05:13:51.663863 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:13:51 localhost.localdomain microshift[132400]: kubelet E0213 05:13:51.664228 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:13:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:53.287248 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet I0213 05:13:53.561388 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" exitCode=1 Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet I0213 05:13:53.561443 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea} Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet I0213 05:13:53.561650 132400 scope.go:115] "RemoveContainer" containerID="8cef7ee7a300929a2b17465c60f98dfb26380e3d6c286696ccf7991fffaa3add" Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet I0213 05:13:53.561920 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 
05:13:53 localhost.localdomain microshift[132400]: kubelet E0213 05:13:53.562185 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet I0213 05:13:53.664064 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:13:53 localhost.localdomain microshift[132400]: kubelet E0213 05:13:53.664458 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:13:55 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:13:55.851869 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:13:55 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:13:55.852121 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:13:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:13:58.286742 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:13:58 localhost.localdomain microshift[132400]: 
kubelet I0213 05:13:58.664107 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:13:58 localhost.localdomain microshift[132400]: kubelet E0213 05:13:58.664849 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:14:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:03.286183 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:14:04 localhost.localdomain microshift[132400]: kubelet I0213 05:14:04.664476 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:14:04 localhost.localdomain microshift[132400]: kubelet E0213 05:14:04.664881 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:14:04 localhost.localdomain microshift[132400]: kubelet I0213 05:14:04.665278 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:14:04 localhost.localdomain microshift[132400]: kubelet E0213 05:14:04.665463 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller 
pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:14:05 localhost.localdomain microshift[132400]: kubelet I0213 05:14:05.665397 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:14:05 localhost.localdomain microshift[132400]: kubelet E0213 05:14:05.665733 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:14:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:08.287131 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:14:09 localhost.localdomain microshift[132400]: kubelet I0213 05:14:09.664249 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:14:10 localhost.localdomain microshift[132400]: kubelet I0213 05:14:10.589690 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a} Feb 13 05:14:10 localhost.localdomain microshift[132400]: kubelet I0213 05:14:10.590242 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 05:14:11 localhost.localdomain microshift[132400]: kubelet I0213 05:14:11.590175 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller 
namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:14:11 localhost.localdomain microshift[132400]: kubelet I0213 05:14:11.590231 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:14:12 localhost.localdomain microshift[132400]: kubelet I0213 05:14:12.591863 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:14:12 localhost.localdomain microshift[132400]: kubelet I0213 05:14:12.592205 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:14:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:13.286936 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:14:13 localhost.localdomain microshift[132400]: kubelet I0213 05:14:13.594752 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" exitCode=1 Feb 13 05:14:13 localhost.localdomain microshift[132400]: kubelet I0213 05:14:13.595047 132400 
kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a} Feb 13 05:14:13 localhost.localdomain microshift[132400]: kubelet I0213 05:14:13.595098 132400 scope.go:115] "RemoveContainer" containerID="3701b6e8f245c5a712a5834a677a1a721db2aaaa96829efae8efd45d508b980c" Feb 13 05:14:13 localhost.localdomain microshift[132400]: kubelet I0213 05:14:13.595366 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:14:13 localhost.localdomain microshift[132400]: kubelet E0213 05:14:13.596730 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:14:16 localhost.localdomain microshift[132400]: kubelet I0213 05:14:16.666077 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:14:16 localhost.localdomain microshift[132400]: kubelet E0213 05:14:16.666350 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:14:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:18.286594 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:14:18 
localhost.localdomain microshift[132400]: kubelet I0213 05:14:18.664023 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:14:18 localhost.localdomain microshift[132400]: kubelet E0213 05:14:18.664402 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:14:19 localhost.localdomain microshift[132400]: kubelet I0213 05:14:19.663338 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf" Feb 13 05:14:19 localhost.localdomain microshift[132400]: kubelet E0213 05:14:19.663960 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:14:20 localhost.localdomain microshift[132400]: kubelet I0213 05:14:20.902277 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 05:14:20 localhost.localdomain microshift[132400]: kubelet I0213 05:14:20.903077 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:14:20 localhost.localdomain microshift[132400]: kubelet E0213 05:14:20.903431 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:14:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:23.286969 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:14:23 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:14:23.794437 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:14:23 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:14:23.794601 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:14:26 localhost.localdomain microshift[132400]: kubelet I0213 05:14:26.192711 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 05:14:26 localhost.localdomain microshift[132400]: kubelet I0213 05:14:26.193082 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:14:26 localhost.localdomain microshift[132400]: kubelet E0213 05:14:26.193428 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e 
Feb 13 05:14:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:28.286470 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:30 localhost.localdomain microshift[132400]: kubelet I0213 05:14:30.664399 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:14:30 localhost.localdomain microshift[132400]: kubelet E0213 05:14:30.665102 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:14:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:33.286627 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:33 localhost.localdomain microshift[132400]: kubelet I0213 05:14:33.664144 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:14:33 localhost.localdomain microshift[132400]: kubelet E0213 05:14:33.664344 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:14:34 localhost.localdomain microshift[132400]: kubelet I0213 05:14:34.664357 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:14:34 localhost.localdomain microshift[132400]: kubelet E0213 05:14:34.664759 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:14:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:38.286197 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:40 localhost.localdomain microshift[132400]: kubelet I0213 05:14:40.664131 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:14:40 localhost.localdomain microshift[132400]: kubelet E0213 05:14:40.664440 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:14:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:43.286809 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:44 localhost.localdomain microshift[132400]: kubelet I0213 05:14:44.664756 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:14:44 localhost.localdomain microshift[132400]: kubelet E0213 05:14:44.665208 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:14:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:48.286992 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:14:48.333436 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:14:48.333461 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kubelet I0213 05:14:48.664108 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kubelet E0213 05:14:48.664440 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kubelet I0213 05:14:48.664844 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:14:48 localhost.localdomain microshift[132400]: kubelet E0213 05:14:48.665137 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:14:51 localhost.localdomain microshift[132400]: kubelet I0213 05:14:51.664178 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:14:51 localhost.localdomain microshift[132400]: kubelet E0213 05:14:51.664490 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:14:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:53.286862 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:14:57 localhost.localdomain microshift[132400]: kubelet I0213 05:14:57.663475 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:14:57 localhost.localdomain microshift[132400]: kubelet E0213 05:14:57.664090 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:14:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:14:58.287242 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:01 localhost.localdomain microshift[132400]: kubelet I0213 05:15:01.664584 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:15:01 localhost.localdomain microshift[132400]: kubelet E0213 05:15:01.664782 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:15:02 localhost.localdomain microshift[132400]: kubelet I0213 05:15:02.663805 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:15:02 localhost.localdomain microshift[132400]: kubelet E0213 05:15:02.664069 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:15:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:03.287053 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:05 localhost.localdomain microshift[132400]: kubelet I0213 05:15:05.665506 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:15:05 localhost.localdomain microshift[132400]: kubelet E0213 05:15:05.666969 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:15:06 localhost.localdomain microshift[132400]: kubelet I0213 05:15:06.208945 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:15:06 localhost.localdomain microshift[132400]: kubelet E0213 05:15:06.209266 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:17:08.209253919 -0500 EST m=+4315.389600189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:15:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:08.286429 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:10 localhost.localdomain microshift[132400]: kubelet I0213 05:15:10.663933 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:15:10 localhost.localdomain microshift[132400]: kubelet E0213 05:15:10.664254 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:15:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:13.286471 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:14 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:15:14.121910 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:15:14 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:15:14.122060 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:15:15 localhost.localdomain microshift[132400]: kubelet I0213 05:15:15.668241 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:15:15 localhost.localdomain microshift[132400]: kubelet E0213 05:15:15.668397 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:15:15 localhost.localdomain microshift[132400]: kubelet I0213 05:15:15.671819 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:15:15 localhost.localdomain microshift[132400]: kubelet E0213 05:15:15.672123 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:15:16 localhost.localdomain microshift[132400]: kubelet I0213 05:15:16.664670 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:15:16 localhost.localdomain microshift[132400]: kubelet E0213 05:15:16.665084 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:15:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:18.286887 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:23.286706 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:25 localhost.localdomain microshift[132400]: kubelet I0213 05:15:25.669151 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:15:25 localhost.localdomain microshift[132400]: kubelet E0213 05:15:25.669386 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:15:26 localhost.localdomain microshift[132400]: kubelet I0213 05:15:26.665957 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:15:26 localhost.localdomain microshift[132400]: kubelet E0213 05:15:26.666207 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:15:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:28.286806 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:29 localhost.localdomain microshift[132400]: kubelet I0213 05:15:29.663564 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:15:29 localhost.localdomain microshift[132400]: kubelet E0213 05:15:29.664284 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:15:30 localhost.localdomain microshift[132400]: kubelet I0213 05:15:30.664884 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:15:30 localhost.localdomain microshift[132400]: kubelet E0213 05:15:30.665286 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:15:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:33.287167 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:38.286925 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:38 localhost.localdomain microshift[132400]: kubelet I0213 05:15:38.664151 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:15:38 localhost.localdomain microshift[132400]: kubelet E0213 05:15:38.664383 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:15:39 localhost.localdomain microshift[132400]: kubelet E0213 05:15:39.534726 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:15:39 localhost.localdomain microshift[132400]: kubelet E0213 05:15:39.534758 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:15:39 localhost.localdomain microshift[132400]: kubelet I0213 05:15:39.664021 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:15:39 localhost.localdomain microshift[132400]: kubelet E0213 05:15:39.664443 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:15:41 localhost.localdomain microshift[132400]: kubelet I0213 05:15:41.663606 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:15:41 localhost.localdomain microshift[132400]: kubelet E0213 05:15:41.663931 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:15:42 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:15:42.083062 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:15:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:15:42.083094 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:15:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:43.286877 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:43 localhost.localdomain microshift[132400]: kubelet I0213 05:15:43.663644 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:15:43 localhost.localdomain microshift[132400]: kubelet E0213 05:15:43.663830 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:15:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:48.286545 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:50 localhost.localdomain microshift[132400]: kubelet I0213 05:15:50.663438 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:15:51 localhost.localdomain microshift[132400]: kubelet I0213 05:15:51.663626 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:15:51 localhost.localdomain microshift[132400]: kubelet E0213 05:15:51.664285 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:15:51 localhost.localdomain microshift[132400]: kubelet I0213 05:15:51.756338 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643}
Feb 13 05:15:51 localhost.localdomain microshift[132400]: kubelet I0213 05:15:51.756852 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 05:15:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:53.286777 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:54 localhost.localdomain microshift[132400]: kubelet I0213 05:15:54.665236 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:15:54 localhost.localdomain microshift[132400]: kubelet E0213 05:15:54.665580 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:15:56 localhost.localdomain microshift[132400]: kubelet I0213 05:15:56.664915 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:15:56 localhost.localdomain microshift[132400]: kubelet E0213 05:15:56.665074 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:15:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:15:58.286302 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:15:59 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:15:59.346134 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:15:59 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:15:59.346166 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:16:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:03.286958 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:03 localhost.localdomain microshift[132400]: kubelet I0213 05:16:03.347500 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:03 localhost.localdomain microshift[132400]: kubelet I0213 05:16:03.347810 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:04 localhost.localdomain microshift[132400]: kubelet I0213 05:16:04.664582 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:16:04 localhost.localdomain microshift[132400]: kubelet E0213 05:16:04.665668 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:16:06 localhost.localdomain microshift[132400]: kubelet I0213 05:16:06.348188 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:06 localhost.localdomain microshift[132400]: kubelet I0213 05:16:06.348233 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:08.286461 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:08 localhost.localdomain microshift[132400]: kubelet I0213 05:16:08.663939 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:16:08 localhost.localdomain microshift[132400]: kubelet E0213 05:16:08.664095 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:16:09 localhost.localdomain microshift[132400]: kubelet I0213 05:16:09.348592 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:09 localhost.localdomain microshift[132400]: kubelet I0213 05:16:09.349040 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:09 localhost.localdomain microshift[132400]: kubelet I0213 05:16:09.663923 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:16:09 localhost.localdomain microshift[132400]: kubelet E0213 05:16:09.664230 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:16:12 localhost.localdomain microshift[132400]: kubelet I0213 05:16:12.349324 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:12 localhost.localdomain microshift[132400]: kubelet I0213 05:16:12.349371 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:13.286538 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:15 localhost.localdomain microshift[132400]: kubelet I0213 05:16:15.349554 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:15 localhost.localdomain microshift[132400]: kubelet I0213 05:16:15.349893 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:18.286748 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:18 localhost.localdomain microshift[132400]: kubelet I0213 05:16:18.350856 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:18 localhost.localdomain microshift[132400]: kubelet I0213 05:16:18.351084 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:18 localhost.localdomain microshift[132400]: kubelet I0213 05:16:18.664068 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:16:18 localhost.localdomain microshift[132400]: kubelet E0213 05:16:18.665234 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:16:20 localhost.localdomain microshift[132400]: kubelet I0213 05:16:20.663968 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:16:20 localhost.localdomain microshift[132400]: kubelet E0213 05:16:20.664339 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:16:21 localhost.localdomain microshift[132400]: kubelet I0213 05:16:21.353393 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:21 localhost.localdomain microshift[132400]: kubelet I0213 05:16:21.353437 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:21 localhost.localdomain microshift[132400]: kubelet I0213 05:16:21.664156 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:16:21 localhost.localdomain microshift[132400]: kubelet E0213 05:16:21.664632 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:16:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:23.287284 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:24 localhost.localdomain microshift[132400]: kubelet I0213 05:16:24.353539 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:24 localhost.localdomain microshift[132400]: kubelet I0213 05:16:24.353899 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:27 localhost.localdomain microshift[132400]: kubelet I0213 05:16:27.354333 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:27 localhost.localdomain microshift[132400]: kubelet I0213 05:16:27.354765 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:27 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:16:27.962601 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:16:27 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:16:27.962799 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:16:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:28.286654 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:30 localhost.localdomain microshift[132400]: kubelet I0213 05:16:30.355874 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:30 localhost.localdomain microshift[132400]: kubelet I0213 05:16:30.356204 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:32 localhost.localdomain microshift[132400]: kubelet I0213 05:16:32.664695 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:16:32 localhost.localdomain microshift[132400]: kubelet E0213 05:16:32.665341 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:16:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:33.286668 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:16:33 localhost.localdomain microshift[132400]: kubelet I0213 05:16:33.357334 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:16:33 localhost.localdomain microshift[132400]: kubelet I0213 05:16:33.357384 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:16:34 localhost.localdomain microshift[132400]: kubelet I0213 05:16:34.663941 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:16:34 localhost.localdomain microshift[132400]: kubelet E0213 05:16:34.664549 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to
\"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:16:35 localhost.localdomain microshift[132400]: kubelet I0213 05:16:35.668986 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:16:35 localhost.localdomain microshift[132400]: kubelet E0213 05:16:35.669439 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:16:36 localhost.localdomain microshift[132400]: kubelet I0213 05:16:36.357790 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:36 localhost.localdomain microshift[132400]: kubelet I0213 05:16:36.357834 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:38.287163 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:16:39 localhost.localdomain microshift[132400]: 
kubelet I0213 05:16:39.358934 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:39 localhost.localdomain microshift[132400]: kubelet I0213 05:16:39.359367 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:42 localhost.localdomain microshift[132400]: kubelet I0213 05:16:42.359737 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:42 localhost.localdomain microshift[132400]: kubelet I0213 05:16:42.360056 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:43.286715 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:16:45 localhost.localdomain microshift[132400]: kubelet I0213 05:16:45.361198 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:45 localhost.localdomain 
microshift[132400]: kubelet I0213 05:16:45.361259 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:46 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:16:46.309158 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:16:46 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:16:46.309183 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:16:47 localhost.localdomain microshift[132400]: kubelet I0213 05:16:47.663996 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:16:47 localhost.localdomain microshift[132400]: kubelet E0213 05:16:47.664784 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:16:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:48.286846 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet I0213 05:16:48.362362 132400 patch_prober.go:28] 
interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet I0213 05:16:48.362578 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet I0213 05:16:48.663882 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet E0213 05:16:48.664221 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet I0213 05:16:48.664744 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:16:48 localhost.localdomain microshift[132400]: kubelet E0213 05:16:48.665081 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" 
podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:16:51 localhost.localdomain microshift[132400]: kubelet I0213 05:16:51.363446 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:51 localhost.localdomain microshift[132400]: kubelet I0213 05:16:51.363719 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:53.287260 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:16:54 localhost.localdomain microshift[132400]: kubelet I0213 05:16:54.364733 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:54 localhost.localdomain microshift[132400]: kubelet I0213 05:16:54.364784 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:57 localhost.localdomain microshift[132400]: kubelet I0213 05:16:57.365784 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:16:57 localhost.localdomain microshift[132400]: kubelet I0213 05:16:57.365836 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:16:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:16:58.286783 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:16:59 localhost.localdomain microshift[132400]: kubelet I0213 05:16:59.663703 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:16:59 localhost.localdomain microshift[132400]: kubelet E0213 05:16:59.664494 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:17:00 localhost.localdomain microshift[132400]: kubelet I0213 05:17:00.366833 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:00 localhost.localdomain microshift[132400]: kubelet I0213 05:17:00.366884 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:00 localhost.localdomain microshift[132400]: kubelet I0213 05:17:00.665375 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:17:00 localhost.localdomain microshift[132400]: kubelet E0213 05:17:00.666271 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:17:02 localhost.localdomain microshift[132400]: kubelet I0213 05:17:02.663383 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:17:02 localhost.localdomain microshift[132400]: kubelet E0213 05:17:02.663869 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:17:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:03.286999 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:17:03 localhost.localdomain microshift[132400]: kubelet I0213 05:17:03.367995 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:03 localhost.localdomain microshift[132400]: 
kubelet I0213 05:17:03.368046 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:04 localhost.localdomain microshift[132400]: kubelet I0213 05:17:04.631532 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:04 localhost.localdomain microshift[132400]: kubelet I0213 05:17:04.631584 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:06 localhost.localdomain microshift[132400]: kubelet I0213 05:17:06.368325 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:06 localhost.localdomain microshift[132400]: kubelet I0213 05:17:06.368367 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:08 localhost.localdomain microshift[132400]: kubelet I0213 05:17:08.223042 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:17:08 localhost.localdomain microshift[132400]: kubelet E0213 05:17:08.223151 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:19:10.223139526 -0500 EST m=+4437.403485798 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:17:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:08.286855 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:17:09 localhost.localdomain microshift[132400]: kubelet I0213 05:17:09.369254 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:09 localhost.localdomain microshift[132400]: kubelet I0213 05:17:09.369307 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:12 localhost.localdomain microshift[132400]: kubelet I0213 05:17:12.370175 132400 
patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:12 localhost.localdomain microshift[132400]: kubelet I0213 05:17:12.370234 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:13.286307 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:17:13 localhost.localdomain microshift[132400]: kubelet I0213 05:17:13.663815 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:17:13 localhost.localdomain microshift[132400]: kubelet E0213 05:17:13.664599 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet I0213 05:17:14.632221 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet I0213 05:17:14.632283 132400 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet I0213 05:17:14.665200 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet E0213 05:17:14.665527 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet I0213 05:17:14.665951 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6" Feb 13 05:17:14 localhost.localdomain microshift[132400]: kubelet I0213 05:17:14.895020 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf} Feb 13 05:17:15 localhost.localdomain microshift[132400]: kubelet I0213 05:17:15.370425 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:15 localhost.localdomain microshift[132400]: kubelet I0213 05:17:15.370634 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:18.286890 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:17:18 localhost.localdomain microshift[132400]: kubelet I0213 05:17:18.371305 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:18 localhost.localdomain microshift[132400]: kubelet I0213 05:17:18.371707 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:21 localhost.localdomain microshift[132400]: kubelet I0213 05:17:21.372783 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:21 localhost.localdomain microshift[132400]: kubelet I0213 05:17:21.372839 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:23.286414 
132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:17:24 localhost.localdomain microshift[132400]: kubelet I0213 05:17:24.373927 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:24 localhost.localdomain microshift[132400]: kubelet I0213 05:17:24.373974 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:24 localhost.localdomain microshift[132400]: kubelet I0213 05:17:24.631621 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:17:24 localhost.localdomain microshift[132400]: kubelet I0213 05:17:24.631909 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:17:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:17:26.971486 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:17:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:17:26.971514 132400 reflector.go:140] 
github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet I0213 05:17:27.374088 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet I0213 05:17:27.374139 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet I0213 05:17:27.663712 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet I0213 05:17:27.663919 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet E0213 05:17:27.664187 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:17:27 localhost.localdomain microshift[132400]: kubelet E0213 05:17:27.664224 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:17:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:28.287031 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:30 localhost.localdomain microshift[132400]: kubelet I0213 05:17:30.375094 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:30 localhost.localdomain microshift[132400]: kubelet I0213 05:17:30.375560 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:33.286889 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:33 localhost.localdomain microshift[132400]: kubelet I0213 05:17:33.375789 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:33 localhost.localdomain microshift[132400]: kubelet I0213 05:17:33.375832 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:34 localhost.localdomain microshift[132400]: kubelet I0213 05:17:34.631186 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:34 localhost.localdomain microshift[132400]: kubelet I0213 05:17:34.631239 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:36 localhost.localdomain microshift[132400]: kubelet I0213 05:17:36.376210 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:36 localhost.localdomain microshift[132400]: kubelet I0213 05:17:36.376565 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:38 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:17:38.003109 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:17:38 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:17:38.003130 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:17:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:38.286770 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:39 localhost.localdomain microshift[132400]: kubelet I0213 05:17:39.377135 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:39 localhost.localdomain microshift[132400]: kubelet I0213 05:17:39.377186 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:41 localhost.localdomain microshift[132400]: kubelet I0213 05:17:41.664228 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:17:41 localhost.localdomain microshift[132400]: kubelet E0213 05:17:41.664645 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:17:41 localhost.localdomain microshift[132400]: kubelet I0213 05:17:41.664904 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:17:41 localhost.localdomain microshift[132400]: kubelet E0213 05:17:41.665214 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:17:42 localhost.localdomain microshift[132400]: kubelet I0213 05:17:42.377602 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:42 localhost.localdomain microshift[132400]: kubelet I0213 05:17:42.377678 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:42 localhost.localdomain microshift[132400]: kubelet E0213 05:17:42.727969 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:17:42 localhost.localdomain microshift[132400]: kubelet E0213 05:17:42.728750 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:17:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:43.287221 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:44 localhost.localdomain microshift[132400]: kubelet I0213 05:17:44.631984 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:44 localhost.localdomain microshift[132400]: kubelet I0213 05:17:44.632045 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:44 localhost.localdomain microshift[132400]: kubelet I0213 05:17:44.632071 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 05:17:44 localhost.localdomain microshift[132400]: kubelet I0213 05:17:44.632382 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 05:17:44 localhost.localdomain microshift[132400]: kubelet I0213 05:17:44.632465 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643" gracePeriod=30
Feb 13 05:17:45 localhost.localdomain microshift[132400]: kubelet I0213 05:17:45.378091 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:45 localhost.localdomain microshift[132400]: kubelet I0213 05:17:45.378145 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:48.286377 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:48 localhost.localdomain microshift[132400]: kubelet I0213 05:17:48.379172 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:48 localhost.localdomain microshift[132400]: kubelet I0213 05:17:48.379439 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:49 localhost.localdomain microshift[132400]: kubelet I0213 05:17:49.952763 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" exitCode=255
Feb 13 05:17:49 localhost.localdomain microshift[132400]: kubelet I0213 05:17:49.952792 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf}
Feb 13 05:17:49 localhost.localdomain microshift[132400]: kubelet I0213 05:17:49.952818 132400 scope.go:115] "RemoveContainer" containerID="1d7a2b97144b48a01ee809a0e7411193fbbcc815962d358f11b6386d373dc2b6"
Feb 13 05:17:49 localhost.localdomain microshift[132400]: kubelet I0213 05:17:49.953081 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:17:49 localhost.localdomain microshift[132400]: kubelet E0213 05:17:49.953372 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:17:51 localhost.localdomain microshift[132400]: kubelet I0213 05:17:51.379945 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:51 localhost.localdomain microshift[132400]: kubelet I0213 05:17:51.380000 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:52 localhost.localdomain microshift[132400]: kubelet I0213 05:17:52.663736 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:17:52 localhost.localdomain microshift[132400]: kubelet E0213 05:17:52.664346 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:17:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:53.287425 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:17:54 localhost.localdomain microshift[132400]: kubelet I0213 05:17:54.380914 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:54 localhost.localdomain microshift[132400]: kubelet I0213 05:17:54.381243 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:54 localhost.localdomain microshift[132400]: kubelet I0213 05:17:54.664062 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:17:54 localhost.localdomain microshift[132400]: kubelet E0213 05:17:54.664406 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:17:57 localhost.localdomain microshift[132400]: kubelet I0213 05:17:57.382540 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:17:57 localhost.localdomain microshift[132400]: kubelet I0213 05:17:57.383186 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:17:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:17:58.286766 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:00 localhost.localdomain microshift[132400]: kubelet I0213 05:18:00.383591 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:00 localhost.localdomain microshift[132400]: kubelet I0213 05:18:00.383695 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:01 localhost.localdomain microshift[132400]: kubelet I0213 05:18:01.664888 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:18:01 localhost.localdomain microshift[132400]: kubelet E0213 05:18:01.665065 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:18:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:03.286854 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:03 localhost.localdomain microshift[132400]: kubelet I0213 05:18:03.384828 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:03 localhost.localdomain microshift[132400]: kubelet I0213 05:18:03.384901 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:04 localhost.localdomain microshift[132400]: kubelet I0213 05:18:04.977897 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643" exitCode=0
Feb 13 05:18:04 localhost.localdomain microshift[132400]: kubelet I0213 05:18:04.977944 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643}
Feb 13 05:18:04 localhost.localdomain microshift[132400]: kubelet I0213 05:18:04.977968 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerStarted Data:267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e}
Feb 13 05:18:04 localhost.localdomain microshift[132400]: kubelet I0213 05:18:04.977990 132400 scope.go:115] "RemoveContainer" containerID="113aff6cb42a53054fff1c8ae63de1b3ce89d4a62bde3805d938024b0c6628cf"
Feb 13 05:18:05 localhost.localdomain microshift[132400]: kubelet I0213 05:18:05.663869 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:18:05 localhost.localdomain microshift[132400]: kubelet E0213 05:18:05.664282 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:18:05 localhost.localdomain microshift[132400]: kubelet I0213 05:18:05.664450 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:18:05 localhost.localdomain microshift[132400]: kubelet E0213 05:18:05.664692 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:18:05 localhost.localdomain microshift[132400]: kubelet I0213 05:18:05.981011 132400 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 13 05:18:06 localhost.localdomain microshift[132400]: kubelet I0213 05:18:06.385816 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:06 localhost.localdomain microshift[132400]: kubelet I0213 05:18:06.385890 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:06 localhost.localdomain microshift[132400]: kubelet I0213 05:18:06.385926 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z4v2p"
Feb 13 05:18:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:08.287230 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:12 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:18:12.582261 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:18:12 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:18:12.582288 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:18:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:13.286830 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:15 localhost.localdomain microshift[132400]: kubelet I0213 05:18:15.665889 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:18:15 localhost.localdomain microshift[132400]: kubelet E0213 05:18:15.666073 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:18:16 localhost.localdomain microshift[132400]: kubelet I0213 05:18:16.665336 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:18:16 localhost.localdomain microshift[132400]: kubelet E0213 05:18:16.665647 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:18:16 localhost.localdomain microshift[132400]: kubelet I0213 05:18:16.665689 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:18:16 localhost.localdomain microshift[132400]: kubelet E0213 05:18:16.665991 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:18:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:18.286825 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:18 localhost.localdomain microshift[132400]: kubelet I0213 05:18:18.346452 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:18 localhost.localdomain microshift[132400]: kubelet I0213 05:18:18.346498 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:21 localhost.localdomain microshift[132400]: kubelet I0213 05:18:21.346835 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:21 localhost.localdomain microshift[132400]: kubelet I0213 05:18:21.346871 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:23.286231 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:24 localhost.localdomain microshift[132400]: kubelet I0213 05:18:24.347676 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:24 localhost.localdomain microshift[132400]: kubelet I0213 05:18:24.347722 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:27 localhost.localdomain microshift[132400]: kubelet I0213 05:18:27.348266 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:27 localhost.localdomain microshift[132400]: kubelet I0213 05:18:27.348759 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:28.286698 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:29 localhost.localdomain microshift[132400]: kubelet I0213 05:18:29.663499 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:18:29 localhost.localdomain microshift[132400]: kubelet E0213 05:18:29.664086 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:18:30 localhost.localdomain microshift[132400]: kubelet I0213 05:18:30.349132 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:30 localhost.localdomain microshift[132400]: kubelet I0213 05:18:30.349182 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:30 localhost.localdomain microshift[132400]: kubelet I0213 05:18:30.664064 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:18:30 localhost.localdomain microshift[132400]: kubelet E0213 05:18:30.664862 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:18:31 localhost.localdomain microshift[132400]: kubelet I0213 05:18:31.663846 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:18:31 localhost.localdomain microshift[132400]: kubelet E0213 05:18:31.664290 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:18:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:33.286839 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:33 localhost.localdomain microshift[132400]: kubelet I0213 05:18:33.349859 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:33 localhost.localdomain microshift[132400]: kubelet I0213 05:18:33.349910 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:36 localhost.localdomain microshift[132400]: kubelet I0213 05:18:36.351553 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:36 localhost.localdomain microshift[132400]: kubelet I0213 05:18:36.351592 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:37 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:18:37.893196 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:18:37 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:18:37.893247 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:18:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:38.286722 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:39 localhost.localdomain microshift[132400]: kubelet I0213 05:18:39.352633 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:39 localhost.localdomain microshift[132400]: kubelet I0213 05:18:39.352710 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:41 localhost.localdomain microshift[132400]: kubelet I0213 05:18:41.663753 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:18:41 localhost.localdomain microshift[132400]: kubelet E0213 05:18:41.663916 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:18:41 localhost.localdomain microshift[132400]: kubelet I0213 05:18:41.663945 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea"
Feb 13 05:18:41 localhost.localdomain microshift[132400]: kubelet E0213 05:18:41.664167 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:18:42 localhost.localdomain microshift[132400]: kubelet I0213 05:18:42.352849 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:42 localhost.localdomain microshift[132400]: kubelet I0213 05:18:42.352905 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:43.287067 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:44 localhost.localdomain microshift[132400]: kubelet I0213 05:18:44.664583 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a"
Feb 13 05:18:44 localhost.localdomain microshift[132400]: kubelet E0213 05:18:44.665055 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:18:45 localhost.localdomain microshift[132400]: kubelet I0213 05:18:45.353542 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:18:45 localhost.localdomain microshift[132400]: kubelet I0213 05:18:45.353798 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:18:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:48.286610 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:18:48 localhost.localdomain microshift[132400]: kubelet I0213 05:18:48.354479 132400 patch_prober.go:28] 
interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:18:48 localhost.localdomain microshift[132400]: kubelet I0213 05:18:48.354563 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:18:51 localhost.localdomain microshift[132400]: kubelet I0213 05:18:51.354800 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:18:51 localhost.localdomain microshift[132400]: kubelet I0213 05:18:51.355119 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 13 05:18:52 localhost.localdomain microshift[132400]: kubelet I0213 05:18:52.663949 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:18:52 localhost.localdomain microshift[132400]: kubelet E0213 05:18:52.664238 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" 
pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:18:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:53.286583 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:18:54 localhost.localdomain microshift[132400]: kubelet I0213 05:18:54.355791 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:18:54 localhost.localdomain microshift[132400]: kubelet I0213 05:18:54.355830 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:18:55 localhost.localdomain microshift[132400]: kubelet I0213 05:18:55.665231 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:18:55 localhost.localdomain microshift[132400]: kubelet E0213 05:18:55.665524 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:18:55 localhost.localdomain microshift[132400]: kubelet I0213 05:18:55.665769 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:18:55 localhost.localdomain microshift[132400]: kubelet E0213 05:18:55.665870 132400 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:18:57 localhost.localdomain microshift[132400]: kubelet I0213 05:18:57.356119 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:18:57 localhost.localdomain microshift[132400]: kubelet I0213 05:18:57.356229 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:18:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:18:58.286437 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:00 localhost.localdomain microshift[132400]: kubelet I0213 05:19:00.356387 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:00 localhost.localdomain microshift[132400]: kubelet I0213 05:19:00.356713 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:03.286427 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:03 localhost.localdomain microshift[132400]: kubelet I0213 05:19:03.357096 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:03 localhost.localdomain microshift[132400]: kubelet I0213 05:19:03.357153 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:05 localhost.localdomain microshift[132400]: kubelet I0213 05:19:05.664925 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:19:06 localhost.localdomain microshift[132400]: kubelet I0213 05:19:06.069533 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerStarted Data:285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49} Feb 13 05:19:06 localhost.localdomain microshift[132400]: kubelet I0213 05:19:06.357557 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:06 localhost.localdomain microshift[132400]: kubelet I0213 05:19:06.357595 132400 prober.go:109] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:07 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:19:07.128723 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:19:07 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:19:07.128747 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:19:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:08.286873 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.076001 132400 generic.go:332] "Generic (PLEG): container finished" podID=763e920a-b594-4485-bf77-dfed5dddbf03 containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" exitCode=1 Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.076035 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-node-9bnp5" event=&{ID:763e920a-b594-4485-bf77-dfed5dddbf03 Type:ContainerDied Data:285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49} Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.076059 132400 scope.go:115] "RemoveContainer" containerID="a58d5e5b25a20ece191c894271c639053cbe6647279bfdb8f3ee0ad37daa90ea" Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.076386 132400 scope.go:115] "RemoveContainer" 
containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet E0213 05:19:09.076676 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.358315 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.358650 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.663533 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet E0213 05:19:09.663721 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:19:09 
localhost.localdomain microshift[132400]: kubelet I0213 05:19:09.663821 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:19:09 localhost.localdomain microshift[132400]: kubelet E0213 05:19:09.664156 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:19:10 localhost.localdomain microshift[132400]: kubelet I0213 05:19:10.245460 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:19:10 localhost.localdomain microshift[132400]: kubelet E0213 05:19:10.245783 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:21:12.245771688 -0500 EST m=+4559.426117956 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:19:12 localhost.localdomain microshift[132400]: kubelet I0213 05:19:12.359304 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:12 localhost.localdomain microshift[132400]: kubelet I0213 05:19:12.359812 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:13.287207 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:14 localhost.localdomain microshift[132400]: kubelet I0213 05:19:14.632283 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:14 localhost.localdomain microshift[132400]: kubelet I0213 05:19:14.632570 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:15 localhost.localdomain 
microshift[132400]: kubelet I0213 05:19:15.360796 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:15 localhost.localdomain microshift[132400]: kubelet I0213 05:19:15.361021 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:18.286403 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:18 localhost.localdomain microshift[132400]: kubelet I0213 05:19:18.361682 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:18 localhost.localdomain microshift[132400]: kubelet I0213 05:19:18.361740 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:20 localhost.localdomain microshift[132400]: kubelet I0213 05:19:20.902058 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-node-9bnp5" Feb 13 05:19:20 localhost.localdomain microshift[132400]: kubelet I0213 05:19:20.903321 132400 scope.go:115] "RemoveContainer" 
containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:19:20 localhost.localdomain microshift[132400]: kubelet E0213 05:19:20.904718 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:19:21 localhost.localdomain microshift[132400]: kubelet I0213 05:19:21.362394 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:21 localhost.localdomain microshift[132400]: kubelet I0213 05:19:21.362483 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:23.287287 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:19:23 localhost.localdomain microshift[132400]: kubelet I0213 05:19:23.663469 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:19:23 localhost.localdomain microshift[132400]: kubelet I0213 05:19:23.663796 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:19:23 localhost.localdomain microshift[132400]: kubelet E0213 05:19:23.663827 132400 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.100814 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerStarted Data:f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a} Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.101561 132400 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.362762 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.363151 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.631915 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 13 05:19:24 localhost.localdomain microshift[132400]: kubelet I0213 05:19:24.632125 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:25 localhost.localdomain microshift[132400]: kubelet I0213 05:19:25.101590 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:25 localhost.localdomain microshift[132400]: kubelet I0213 05:19:25.101633 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:26 localhost.localdomain microshift[132400]: kubelet I0213 05:19:26.103558 132400 patch_prober.go:28] interesting pod/topolvm-controller-78cbfc4867-qdfs4 container/topolvm-controller namespace/openshift-storage: Readiness probe status=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:26 localhost.localdomain microshift[132400]: kubelet I0213 05:19:26.103611 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerName="topolvm-controller" probeResult=failure output="Get \"http://10.42.0.6:8080/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet I0213 05:19:27.107364 132400 generic.go:332] "Generic (PLEG): container finished" podID=9744aca6-9463-42d2-a05e-f1e3af7b175e containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" exitCode=1 Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet I0213 05:19:27.107394 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" event=&{ID:9744aca6-9463-42d2-a05e-f1e3af7b175e Type:ContainerDied Data:f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a} Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet I0213 05:19:27.107414 132400 scope.go:115] "RemoveContainer" containerID="403a82892c942b5ee942afc97c27862283b5feab7db49b8a85160e9cf9603a3a" Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet I0213 05:19:27.107648 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet E0213 05:19:27.107939 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:19:27 localhost.localdomain microshift[132400]: kubelet I0213 05:19:27.364131 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 13 05:19:27 localhost.localdomain 
microshift[132400]: kubelet I0213 05:19:27.364176 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:28.286897 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:30 localhost.localdomain microshift[132400]: kubelet I0213 05:19:30.364539 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:30 localhost.localdomain microshift[132400]: kubelet I0213 05:19:30.364601 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:31 localhost.localdomain microshift[132400]: kubelet I0213 05:19:31.664259 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:19:31 localhost.localdomain microshift[132400]: kubelet E0213 05:19:31.664879 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:19:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:33.287070 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:33 localhost.localdomain microshift[132400]: kubelet I0213 05:19:33.364980 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:33 localhost.localdomain microshift[132400]: kubelet I0213 05:19:33.365197 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:34 localhost.localdomain microshift[132400]: kubelet I0213 05:19:34.632766 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:34 localhost.localdomain microshift[132400]: kubelet I0213 05:19:34.632813 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:36 localhost.localdomain microshift[132400]: kubelet I0213 05:19:36.366224 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:36 localhost.localdomain microshift[132400]: kubelet I0213 05:19:36.366258 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:37 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:19:37.188208 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:19:37 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:19:37.188232 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:19:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:38.287314 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:38 localhost.localdomain microshift[132400]: kubelet I0213 05:19:38.663885 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:19:38 localhost.localdomain microshift[132400]: kubelet E0213 05:19:38.664218 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:19:39 localhost.localdomain microshift[132400]: kubelet I0213 05:19:39.367264 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:39 localhost.localdomain microshift[132400]: kubelet I0213 05:19:39.367873 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:41 localhost.localdomain microshift[132400]: kubelet I0213 05:19:41.663346 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:19:41 localhost.localdomain microshift[132400]: kubelet E0213 05:19:41.664428 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:19:42 localhost.localdomain microshift[132400]: kubelet I0213 05:19:42.368832 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:42 localhost.localdomain microshift[132400]: kubelet I0213 05:19:42.369032 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:43.286299 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:43 localhost.localdomain microshift[132400]: kubelet I0213 05:19:43.663738 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:19:43 localhost.localdomain microshift[132400]: kubelet E0213 05:19:43.664193 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:19:44 localhost.localdomain microshift[132400]: kubelet I0213 05:19:44.632082 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:44 localhost.localdomain microshift[132400]: kubelet I0213 05:19:44.632493 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:45 localhost.localdomain microshift[132400]: kubelet I0213 05:19:45.369332 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:45 localhost.localdomain microshift[132400]: kubelet I0213 05:19:45.369610 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:45 localhost.localdomain microshift[132400]: kubelet E0213 05:19:45.942064 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:19:45 localhost.localdomain microshift[132400]: kubelet E0213 05:19:45.942094 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9
Feb 13 05:19:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:48.286913 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:48 localhost.localdomain microshift[132400]: kubelet I0213 05:19:48.369863 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:48 localhost.localdomain microshift[132400]: kubelet I0213 05:19:48.370110 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:49 localhost.localdomain microshift[132400]: kubelet I0213 05:19:49.663826 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:19:49 localhost.localdomain microshift[132400]: kubelet E0213 05:19:49.664342 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:19:51 localhost.localdomain microshift[132400]: kubelet I0213 05:19:51.370248 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:51 localhost.localdomain microshift[132400]: kubelet I0213 05:19:51.370285 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:53.287152 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.371044 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.371398 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.632392 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.632431 132400 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8080/health\": dial tcp 10.42.0.7:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.632458 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-dns/dns-default-z4v2p"
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.632835 132400 kuberuntime_manager.go:659] "Message for Container of pod" containerName="dns" containerStatusID={Type:cri-o ID:267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e} pod="openshift-dns/dns-default-z4v2p" containerMessage="Container dns failed liveness probe, will be restarted"
Feb 13 05:19:54 localhost.localdomain microshift[132400]: kubelet I0213 05:19:54.632922 132400 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" containerID="cri-o://267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" gracePeriod=30
Feb 13 05:19:56 localhost.localdomain microshift[132400]: kubelet I0213 05:19:56.665626 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:19:56 localhost.localdomain microshift[132400]: kubelet E0213 05:19:56.666361 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:19:57 localhost.localdomain microshift[132400]: kubelet I0213 05:19:57.371738 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:19:57 localhost.localdomain microshift[132400]: kubelet I0213 05:19:57.371783 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:19:57 localhost.localdomain microshift[132400]: kubelet I0213 05:19:57.663709 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:19:57 localhost.localdomain microshift[132400]: kubelet E0213 05:19:57.664253 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:19:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:19:58.287039 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:00 localhost.localdomain microshift[132400]: kubelet I0213 05:20:00.372901 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:00 localhost.localdomain microshift[132400]: kubelet I0213 05:20:00.373249 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:00 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:20:00.720844 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:20:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:20:00.721001 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:20:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:03.287005 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:03 localhost.localdomain microshift[132400]: kubelet I0213 05:20:03.374302 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:03 localhost.localdomain microshift[132400]: kubelet I0213 05:20:03.374472 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:04 localhost.localdomain microshift[132400]: kubelet I0213 05:20:04.664275 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:20:04 localhost.localdomain microshift[132400]: kubelet E0213 05:20:04.664821 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:20:06 localhost.localdomain microshift[132400]: kubelet I0213 05:20:06.374759 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:06 localhost.localdomain microshift[132400]: kubelet I0213 05:20:06.374795 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:08.286790 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:08 localhost.localdomain microshift[132400]: kubelet I0213 05:20:08.666514 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:20:08 localhost.localdomain microshift[132400]: kubelet E0213 05:20:08.668122 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:20:09 localhost.localdomain microshift[132400]: kubelet I0213 05:20:09.375022 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:09 localhost.localdomain microshift[132400]: kubelet I0213 05:20:09.375559 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:09 localhost.localdomain microshift[132400]: kubelet I0213 05:20:09.664392 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:20:09 localhost.localdomain microshift[132400]: kubelet E0213 05:20:09.665307 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:20:12 localhost.localdomain microshift[132400]: kubelet I0213 05:20:12.375957 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:12 localhost.localdomain microshift[132400]: kubelet I0213 05:20:12.376365 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:13.286755 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:14 localhost.localdomain microshift[132400]: kubelet E0213 05:20:14.739431 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.183315 132400 generic.go:332] "Generic (PLEG): container finished" podID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" exitCode=0
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.183347 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z4v2p" event=&{ID:d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Type:ContainerDied Data:267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e}
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.183372 132400 scope.go:115] "RemoveContainer" containerID="54213e8cb29c3ea240bb6e1d4d4cc23c55590370c49a1d9dfcb77a0951d20643"
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.183595 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet E0213 05:20:15.183869 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.377175 132400 patch_prober.go:28] interesting pod/dns-default-z4v2p container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 13 05:20:15 localhost.localdomain microshift[132400]: kubelet I0213 05:20:15.377234 132400 prober.go:109] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 containerName="dns" probeResult=failure output="Get \"http://10.42.0.7:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 05:20:16 localhost.localdomain microshift[132400]: kubelet I0213 05:20:16.663701 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:20:16 localhost.localdomain microshift[132400]: kubelet E0213 05:20:16.663866 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:20:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:18.287006 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:22 localhost.localdomain microshift[132400]: kubelet I0213 05:20:22.664108 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:20:22 localhost.localdomain microshift[132400]: kubelet E0213 05:20:22.664568 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:20:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:23.287043 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:24 localhost.localdomain microshift[132400]: kubelet I0213 05:20:24.664018 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:20:24 localhost.localdomain microshift[132400]: kubelet E0213 05:20:24.664319 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:20:26 localhost.localdomain microshift[132400]: kubelet I0213 05:20:26.192525 132400 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4"
Feb 13 05:20:26 localhost.localdomain microshift[132400]: kubelet I0213 05:20:26.193162 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:20:26 localhost.localdomain microshift[132400]: kubelet E0213 05:20:26.193563 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:20:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:28.287242 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:29 localhost.localdomain microshift[132400]: kubelet I0213 05:20:29.664169 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:20:29 localhost.localdomain microshift[132400]: kubelet I0213 05:20:29.664534 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:20:29 localhost.localdomain microshift[132400]: kubelet E0213 05:20:29.664635 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:20:29 localhost.localdomain microshift[132400]: kubelet E0213 05:20:29.664825 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.741487 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/dns/21.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.743343 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z4v2p_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7/kube-rbac-proxy/3.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.785041 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sgsm4_c608b4f5-e1d8-4927-9659-5771e2bd21ac/dns-node-resolver/3.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.829102 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85d64c4987-bbdnr_41b0089d-73d0-450a-84f5-8bfec82d97f9/router/2.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.876009 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/northd/3.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.880264 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/nbdb/4.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.884642 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/sbdb/3.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.891891 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-86mcc_0212b1a0-7d9a-4e7e-9ee4-d6d43dcaf9fc/ovnkube-master/3.log"
Feb 13 05:20:31 localhost.localdomain microshift[132400]: kubelet I0213 05:20:31.946357 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6gpbh_0390852d-4e2a-4c00-9b0f-cbf1945008a2/ovn-controller/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.003223 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-7bd9547b57-vhmkf_2e7bce65-b199-4d8a-bc2f-c63494419251/service-ca-controller/19.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.056121 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/self-signed-cert-generator/2.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.057896 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/topolvm-controller/21.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.062194 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-provisioner/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.066514 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/csi-resizer/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.071636 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-controller-78cbfc4867-qdfs4_9744aca6-9463-42d2-a05e-f1e3af7b175e/liveness-probe/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.115803 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/lvmd/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.117438 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/topolvm-node/21.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.121486 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/csi-registrar/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.125422 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/liveness-probe/3.log"
Feb 13 05:20:32 localhost.localdomain microshift[132400]: kubelet I0213 05:20:32.126954 132400 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-storage_topolvm-node-9bnp5_763e920a-b594-4485-bf77-dfed5dddbf03/file-checker/2.log"
Feb 13 05:20:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:33.286890 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:33 localhost.localdomain microshift[132400]: kubelet I0213 05:20:33.664098 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:20:33 localhost.localdomain microshift[132400]: kubelet E0213 05:20:33.664352 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:20:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:20:36.146775 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:20:36 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:20:36.147067 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:20:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:38.286804 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:20:38 localhost.localdomain microshift[132400]: kubelet I0213 05:20:38.664752 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:20:38 localhost.localdomain microshift[132400]: kubelet E0213 05:20:38.665197 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:20:40 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:20:40.476771 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:20:40 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:20:40.477424 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:20:40 localhost.localdomain microshift[132400]: kubelet I0213 05:20:40.664137 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:20:40 localhost.localdomain microshift[132400]: kubelet E0213 05:20:40.664323 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:20:40 localhost.localdomain microshift[132400]: kubelet I0213
05:20:40.664532 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:20:40 localhost.localdomain microshift[132400]: kubelet E0213 05:20:40.664729 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:20:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:43.286843 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:20:45 localhost.localdomain microshift[132400]: kubelet I0213 05:20:45.671817 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:20:45 localhost.localdomain microshift[132400]: kubelet E0213 05:20:45.672315 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:20:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:48.286239 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:20:52 localhost.localdomain microshift[132400]: kubelet I0213 05:20:52.665007 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:20:52 localhost.localdomain microshift[132400]: kubelet E0213 05:20:52.665198 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:20:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:53.286811 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:20:53 localhost.localdomain microshift[132400]: kubelet I0213 05:20:53.663670 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:20:53 localhost.localdomain microshift[132400]: kubelet E0213 05:20:53.663968 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:20:54 localhost.localdomain microshift[132400]: kubelet I0213 05:20:54.664277 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:20:54 localhost.localdomain microshift[132400]: kubelet E0213 05:20:54.664519 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:20:56 localhost.localdomain microshift[132400]: kubelet I0213 05:20:56.664488 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:20:56 localhost.localdomain microshift[132400]: kubelet E0213 
05:20:56.665020 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:20:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:20:58.286552 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:03.286888 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:05 localhost.localdomain microshift[132400]: kubelet I0213 05:21:05.668402 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:21:05 localhost.localdomain microshift[132400]: kubelet E0213 05:21:05.668829 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:21:05 localhost.localdomain microshift[132400]: kubelet I0213 05:21:05.668949 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:21:05 localhost.localdomain microshift[132400]: kubelet E0213 05:21:05.669361 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" 
pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:21:06 localhost.localdomain microshift[132400]: kubelet I0213 05:21:06.664489 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:21:06 localhost.localdomain microshift[132400]: kubelet E0213 05:21:06.664733 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:21:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:08.286421 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:08 localhost.localdomain microshift[132400]: kubelet I0213 05:21:08.665180 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:21:08 localhost.localdomain microshift[132400]: kubelet E0213 05:21:08.665572 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:21:12 localhost.localdomain microshift[132400]: kubelet I0213 05:21:12.252745 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:21:12 
localhost.localdomain microshift[132400]: kubelet E0213 05:21:12.252845 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:23:14.25283396 -0500 EST m=+4681.433180227 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt Feb 13 05:21:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:13.286387 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:16 localhost.localdomain microshift[132400]: kubelet I0213 05:21:16.664582 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:21:16 localhost.localdomain microshift[132400]: kubelet E0213 05:21:16.664767 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:21:16 localhost.localdomain microshift[132400]: kubelet I0213 05:21:16.665065 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:21:16 localhost.localdomain microshift[132400]: kubelet E0213 05:21:16.665507 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:21:17 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:21:17.390321 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:21:17 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:21:17.390349 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:21:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:18.286427 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:20 localhost.localdomain microshift[132400]: kubelet I0213 05:21:20.663487 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:21:20 localhost.localdomain microshift[132400]: kubelet E0213 05:21:20.663908 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:21:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:23.287024 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:23 localhost.localdomain microshift[132400]: kubelet 
I0213 05:21:23.664221 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:21:23 localhost.localdomain microshift[132400]: kubelet E0213 05:21:23.664548 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:21:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:21:26.524478 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:21:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:21:26.524504 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Feb 13 05:21:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:28.286930 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:28 localhost.localdomain microshift[132400]: kubelet I0213 05:21:28.664219 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:21:28 localhost.localdomain microshift[132400]: kubelet E0213 05:21:28.664740 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" 
pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:21:31 localhost.localdomain microshift[132400]: kubelet I0213 05:21:31.663439 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:21:31 localhost.localdomain microshift[132400]: kubelet E0213 05:21:31.663960 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:21:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:33.287287 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:33 localhost.localdomain microshift[132400]: kubelet I0213 05:21:33.663619 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:21:33 localhost.localdomain microshift[132400]: kubelet E0213 05:21:33.664042 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:21:36 localhost.localdomain microshift[132400]: kubelet I0213 05:21:36.665280 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:21:36 localhost.localdomain microshift[132400]: kubelet E0213 05:21:36.665730 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:21:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:38.287252 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:39 localhost.localdomain microshift[132400]: kubelet I0213 05:21:39.663577 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:21:39 localhost.localdomain microshift[132400]: kubelet E0213 05:21:39.664218 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:21:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:43.287167 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:43 localhost.localdomain microshift[132400]: kubelet I0213 05:21:43.663327 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:21:43 localhost.localdomain microshift[132400]: kubelet E0213 05:21:43.663764 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:21:48 
localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:48.286751 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:48 localhost.localdomain microshift[132400]: kubelet I0213 05:21:48.664842 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:21:48 localhost.localdomain microshift[132400]: kubelet E0213 05:21:48.665629 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:21:49 localhost.localdomain microshift[132400]: kubelet E0213 05:21:49.141759 132400 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" Feb 13 05:21:49 localhost.localdomain microshift[132400]: kubelet E0213 05:21:49.141940 132400 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition" pod="openshift-ingress/router-default-85d64c4987-bbdnr" podUID=41b0089d-73d0-450a-84f5-8bfec82d97f9 Feb 13 05:21:50 localhost.localdomain microshift[132400]: kubelet I0213 05:21:50.663931 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:21:50 localhost.localdomain microshift[132400]: kubelet E0213 05:21:50.664184 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:21:51 localhost.localdomain microshift[132400]: kubelet I0213 05:21:51.663359 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:21:51 localhost.localdomain microshift[132400]: kubelet E0213 05:21:51.663896 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:21:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:53.286310 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:21:57 localhost.localdomain microshift[132400]: kubelet I0213 05:21:57.663914 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:21:57 localhost.localdomain microshift[132400]: kubelet E0213 05:21:57.664108 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:21:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:21:58.287387 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:22:00 localhost.localdomain 
microshift[132400]: kube-apiserver W0213 05:22:00.947301 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:22:00 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:22:00.947323 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Feb 13 05:22:01 localhost.localdomain microshift[132400]: kubelet I0213 05:22:01.664369 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49" Feb 13 05:22:01 localhost.localdomain microshift[132400]: kubelet E0213 05:22:01.665288 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03 Feb 13 05:22:02 localhost.localdomain microshift[132400]: kubelet I0213 05:22:02.665740 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e" Feb 13 05:22:02 localhost.localdomain microshift[132400]: kubelet E0213 05:22:02.665949 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7 Feb 13 05:22:03 localhost.localdomain 
microshift[132400]: sysconfwatch-controller I0213 05:22:03.286428 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:22:06 localhost.localdomain microshift[132400]: kubelet I0213 05:22:06.666068 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a" Feb 13 05:22:06 localhost.localdomain microshift[132400]: kubelet E0213 05:22:06.666518 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e Feb 13 05:22:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:08.287137 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:22:10 localhost.localdomain microshift[132400]: kubelet I0213 05:22:10.663901 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf" Feb 13 05:22:10 localhost.localdomain microshift[132400]: kubelet E0213 05:22:10.664324 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251 Feb 13 05:22:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:13.286647 132400 net.go:46] ovn gateway IP address: 192.168.122.17 Feb 13 05:22:14 localhost.localdomain microshift[132400]: kubelet I0213 05:22:14.664975 132400 scope.go:115] "RemoveContainer" 
containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:22:14 localhost.localdomain microshift[132400]: kubelet E0213 05:22:14.665745 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:22:15 localhost.localdomain microshift[132400]: kubelet I0213 05:22:15.663952 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:22:15 localhost.localdomain microshift[132400]: kubelet E0213 05:22:15.664303 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:22:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:18.286451 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:19 localhost.localdomain microshift[132400]: kubelet I0213 05:22:19.663316 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:22:19 localhost.localdomain microshift[132400]: kubelet E0213 05:22:19.663861 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:22:22 localhost.localdomain microshift[132400]: kubelet I0213 05:22:22.664282 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:22:22 localhost.localdomain microshift[132400]: kubelet E0213 05:22:22.664755 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:22:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:23.286472 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:26 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:22:26.040633 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:22:26 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:22:26.040689 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:22:26 localhost.localdomain microshift[132400]: kubelet I0213 05:22:26.666130 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:22:26 localhost.localdomain microshift[132400]: kubelet E0213 05:22:26.666395 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:22:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:28.286418 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:30 localhost.localdomain microshift[132400]: kubelet I0213 05:22:30.664985 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:22:30 localhost.localdomain microshift[132400]: kubelet E0213 05:22:30.665592 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:22:30 localhost.localdomain microshift[132400]: kubelet I0213 05:22:30.665909 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:22:30 localhost.localdomain microshift[132400]: kubelet E0213 05:22:30.666520 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:22:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:33.286891 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:33 localhost.localdomain microshift[132400]: kubelet I0213 05:22:33.664162 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:22:33 localhost.localdomain microshift[132400]: kubelet E0213 05:22:33.664331 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:22:38 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:38.286588 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:38 localhost.localdomain microshift[132400]: kubelet I0213 05:22:38.663529 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:22:38 localhost.localdomain microshift[132400]: kubelet E0213 05:22:38.664187 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:22:41 localhost.localdomain microshift[132400]: kubelet I0213 05:22:41.664139 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:22:41 localhost.localdomain microshift[132400]: kubelet E0213 05:22:41.664803 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:22:42 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:22:42.236402 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:22:42 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:22:42.236552 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:22:42 localhost.localdomain microshift[132400]: kubelet I0213 05:22:42.665070 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:22:42 localhost.localdomain microshift[132400]: kubelet E0213 05:22:42.665304 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:22:43 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:43.287024 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:48 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:48.286861 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:48 localhost.localdomain microshift[132400]: kubelet I0213 05:22:48.664311 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:22:48 localhost.localdomain microshift[132400]: kubelet E0213 05:22:48.664508 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:22:53 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:53.286533 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet I0213 05:22:53.664186 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet E0213 05:22:53.665211 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet I0213 05:22:53.665319 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet E0213 05:22:53.665797 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet I0213 05:22:53.665881 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:22:53 localhost.localdomain microshift[132400]: kubelet E0213 05:22:53.666267 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:22:58 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:22:58.286527 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:22:59 localhost.localdomain microshift[132400]: kubelet I0213 05:22:59.664350 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:23:00 localhost.localdomain microshift[132400]: kubelet I0213 05:23:00.446614 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerStarted Data:7426dfc52ab9675e16f334b0a7fadc73bc100eaee0b7ad1dcaff80ab345d37f0}
Feb 13 05:23:03 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:03.287171 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:04 localhost.localdomain microshift[132400]: kubelet I0213 05:23:04.664270 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:23:04 localhost.localdomain microshift[132400]: kubelet E0213 05:23:04.664483 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:23:05 localhost.localdomain microshift[132400]: kubelet I0213 05:23:05.667793 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:23:05 localhost.localdomain microshift[132400]: kubelet E0213 05:23:05.668086 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:23:06 localhost.localdomain microshift[132400]: kubelet I0213 05:23:06.666039 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:23:06 localhost.localdomain microshift[132400]: kubelet E0213 05:23:06.666411 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:23:06 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:23:06.792259 132400 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:23:06 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:23:06.792511 132400 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Feb 13 05:23:08 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:08.286436 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:13 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:13.286993 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:14 localhost.localdomain microshift[132400]: kubelet I0213 05:23:14.281272 132400 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle\") pod \"router-default-85d64c4987-bbdnr\" (UID: \"41b0089d-73d0-450a-84f5-8bfec82d97f9\") " pod="openshift-ingress/router-default-85d64c4987-bbdnr"
Feb 13 05:23:14 localhost.localdomain microshift[132400]: kubelet E0213 05:23:14.281594 132400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle podName:41b0089d-73d0-450a-84f5-8bfec82d97f9 nodeName:}" failed. No retries permitted until 2023-02-13 05:25:16.281582552 -0500 EST m=+4803.461928821 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/41b0089d-73d0-450a-84f5-8bfec82d97f9-service-ca-bundle") pod "router-default-85d64c4987-bbdnr" (UID: "41b0089d-73d0-450a-84f5-8bfec82d97f9") : configmap references non-existent config key: service-ca.crt
Feb 13 05:23:15 localhost.localdomain microshift[132400]: kubelet I0213 05:23:15.663336 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:23:15 localhost.localdomain microshift[132400]: kubelet E0213 05:23:15.668087 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:23:18 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:18.287068 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:18 localhost.localdomain microshift[132400]: kubelet I0213 05:23:18.664089 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:23:18 localhost.localdomain microshift[132400]: kubelet E0213 05:23:18.664522 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:23:21 localhost.localdomain microshift[132400]: kubelet I0213 05:23:21.663843 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:23:21 localhost.localdomain microshift[132400]: kubelet E0213 05:23:21.664152 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:23:23 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:23.286531 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:28 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:28.287175 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:29 localhost.localdomain microshift[132400]: kubelet I0213 05:23:29.663934 132400 scope.go:115] "RemoveContainer" containerID="267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e"
Feb 13 05:23:29 localhost.localdomain microshift[132400]: kubelet E0213 05:23:29.664473 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dns pod=dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)\"" pod="openshift-dns/dns-default-z4v2p" podUID=d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7
Feb 13 05:23:29 localhost.localdomain microshift[132400]: kubelet I0213 05:23:29.664900 132400 scope.go:115] "RemoveContainer" containerID="285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49"
Feb 13 05:23:29 localhost.localdomain microshift[132400]: kubelet E0213 05:23:29.665197 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-node pod=topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)\"" pod="openshift-storage/topolvm-node-9bnp5" podUID=763e920a-b594-4485-bf77-dfed5dddbf03
Feb 13 05:23:33 localhost.localdomain microshift[132400]: sysconfwatch-controller I0213 05:23:33.286828 132400 net.go:46] ovn gateway IP address: 192.168.122.17
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet I0213 05:23:34.494978 132400 generic.go:332] "Generic (PLEG): container finished" podID=2e7bce65-b199-4d8a-bc2f-c63494419251 containerID="7426dfc52ab9675e16f334b0a7fadc73bc100eaee0b7ad1dcaff80ab345d37f0" exitCode=255
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet I0213 05:23:34.495096 132400 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" event=&{ID:2e7bce65-b199-4d8a-bc2f-c63494419251 Type:ContainerDied Data:7426dfc52ab9675e16f334b0a7fadc73bc100eaee0b7ad1dcaff80ab345d37f0}
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet I0213 05:23:34.495191 132400 scope.go:115] "RemoveContainer" containerID="00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf"
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet I0213 05:23:34.495386 132400 scope.go:115] "RemoveContainer" containerID="7426dfc52ab9675e16f334b0a7fadc73bc100eaee0b7ad1dcaff80ab345d37f0"
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet E0213 05:23:34.495559 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=service-ca-controller pod=service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)\"" pod="openshift-service-ca/service-ca-7bd9547b57-vhmkf" podUID=2e7bce65-b199-4d8a-bc2f-c63494419251
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet I0213 05:23:34.665334 132400 scope.go:115] "RemoveContainer" containerID="f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a"
Feb 13 05:23:34 localhost.localdomain microshift[132400]: kubelet E0213 05:23:34.666456 132400 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"topolvm-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=topolvm-controller pod=topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e)\"" pod="openshift-storage/topolvm-controller-78cbfc4867-qdfs4" podUID=9744aca6-9463-42d2-a05e-f1e3af7b175e
Feb 13 05:23:36 localhost.localdomain microshift[132400]: kube-apiserver W0213 05:23:36.851648 132400 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Feb 13 05:23:36 localhost.localdomain microshift[132400]: kube-apiserver E0213 05:23:36.851686 132400 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)