-- Logs begin at Tue 2023-01-17 14:37:59 UTC, end at Tue 2023-01-17 15:01:58 UTC. -- Jan 17 14:38:18 edgenius systemd[1]: Starting MicroShift... Jan 17 14:38:19 edgenius microshift[2779]: ??? I0117 14:38:19.310152 2779 run.go:115] Starting MicroShift Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310698 2779 certchains.go:122] [admin-kubeconfig-signer] rotate at: 2032-01-13 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310718 2779 certchains.go:122] [admin-kubeconfig-signer admin-kubeconfig-client] rotate at: 2032-01-13 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310724 2779 certchains.go:122] [aggregator-signer] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310729 2779 certchains.go:122] [aggregator-signer aggregator-client] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310733 2779 certchains.go:122] [etcd-signer] rotate at: 2032-01-13 09:57:59 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310737 2779 certchains.go:122] [etcd-signer apiserver-etcd-client] rotate at: 2032-01-13 09:57:59 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310740 2779 certchains.go:122] [ingress-ca] rotate at: 2032-01-13 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310745 2779 certchains.go:122] [ingress-ca router-default-serving] rotate at: 2023-09-12 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310748 2779 certchains.go:122] [kube-apiserver-external-signer] rotate at: 2032-01-13 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310752 2779 certchains.go:122] [kube-apiserver-external-signer kube-external-serving] rotate at: 2023-09-19 14:38:19 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? 
E0117 14:38:19.310756 2779 certchains.go:122] [kube-apiserver-localhost-signer] rotate at: 2032-01-13 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310760 2779 certchains.go:122] [kube-apiserver-localhost-signer kube-apiserver-localhost-serving] rotate at: 2023-09-12 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310763 2779 certchains.go:122] [kube-apiserver-service-network-signer] rotate at: 2032-01-13 09:57:59 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310767 2779 certchains.go:122] [kube-apiserver-service-network-signer kube-apiserver-service-network-serving] rotate at: 2023-09-12 09:57:59 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310770 2779 certchains.go:122] [kube-apiserver-to-kubelet-signer] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310775 2779 certchains.go:122] [kube-apiserver-to-kubelet-signer kube-apiserver-to-kubelet-client] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310781 2779 certchains.go:122] [kube-control-plane-signer] rotate at: 2023-09-12 09:57:55 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310803 2779 certchains.go:122] [kube-control-plane-signer cluster-policy-controller] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310810 2779 certchains.go:122] [kube-control-plane-signer kube-controller-manager] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310813 2779 certchains.go:122] [kube-control-plane-signer kube-scheduler] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? 
E0117 14:38:19.310817 2779 certchains.go:122] [kube-control-plane-signer route-controller-manager] rotate at: 2023-09-12 09:57:56 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310820 2779 certchains.go:122] [kubelet-signer] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310824 2779 certchains.go:122] [kubelet-signer kube-csr-signer] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310830 2779 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-client] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310834 2779 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-server] rotate at: 2023-09-12 09:57:57 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310838 2779 certchains.go:122] [service-ca] rotate at: 2032-01-13 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? E0117 14:38:19.310841 2779 certchains.go:122] [service-ca route-controller-manager-serving] rotate at: 2023-09-12 09:57:58 +0000 UTC Jan 17 14:38:19 edgenius microshift[2779]: ??? 
I0117 14:38:19.311028 2779 run.go:126] Started service-manager Jan 17 14:38:19 edgenius microshift[2779]: etcd I0117 14:38:19.311077 2779 manager.go:114] Starting etcd Jan 17 14:38:19 edgenius microshift[2779]: sysconfwatch-controller I0117 14:38:19.311153 2779 manager.go:114] Starting sysconfwatch-controller Jan 17 14:38:19 edgenius microshift[2779]: sysconfwatch-controller I0117 14:38:19.311503 2779 sysconfwatch_linux.go:89] starting sysconfwatch-controller with IP address "172.27.117.179" Jan 17 14:38:19 edgenius microshift[2779]: sysconfwatch-controller I0117 14:38:19.311540 2779 sysconfwatch_linux.go:95] sysconfwatch-controller is ready Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.311Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://172.27.117.179:2380"]} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.312Z","caller":"embed/etcd.go:481","msg":"starting with peer TLS","tls-info":"cert = /var/lib/microshift/certs/etcd-signer/etcd-peer/peer.crt, key = /var/lib/microshift/certs/etcd-signer/etcd-peer/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/microshift/certs/etcd-signer/ca.crt, client-cert-auth = false, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"]} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.314Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.27.117.179:2379"]} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.316Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"Not provided (use ./build instead of go 
build)","go-version":"go1.19.2","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":true,"name":"edgenius","data-dir":"/var/lib/microshift/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/microshift/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.27.117.179:2380"],"listen-peer-urls":["https://172.27.117.179:2380"],"advertise-client-urls":["https://172.27.117.179:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.117.179:2379"],"listen-metrics-urls":["https://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.334Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/microshift/etcd/member/snap/db","took":"17.463814ms"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.959Z","caller":"etcdserver/server.go:511","msg":"recovered v2 store from snapshot","snapshot-index":800008,"snapshot-size":"8.2 kB"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.959Z","caller":"etcdserver/server.go:524","msg":"recovered v3 backend from snapshot","backend-size-bytes":8331264,"backend-size":"8.3 MB","backend-size-in-use-bytes":4747264,"backend-size-in-use":"4.7 MB"} Jan 17 14:38:19 edgenius microshift[2779]: 
{"level":"info","ts":"2023-01-17T14:38:19.985Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"55c2df2a192daa3c","local-member-id":"15d369d7483263c9","commit-index":811069} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 switched to configuration voters=(1572717068232582089)"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 became follower at term 5"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 15d369d7483263c9 [peers: [15d369d7483263c9], term: 5, commit: 811069, applied: 800008, lastindex: 811069, lastterm: 5]"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"55c2df2a192daa3c","local-member-id":"15d369d7483263c9","recovered-remote-peer-id":"15d369d7483263c9","recovered-remote-peer-urls":["https://172.27.117.179:2380"],"recovered-remote-peer-is-learner":false} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.986Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"warn","ts":"2023-01-17T14:38:19.989Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.991Z","caller":"mvcc/kvstore.go:345","msg":"restored last 
compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":690693} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.994Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":691312} Jan 17 14:38:19 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:19.997Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.001Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"15d369d7483263c9","local-server-version":"3.5.3","cluster-id":"55c2df2a192daa3c","cluster-version":"3.5"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.001Z","caller":"etcdserver/server.go:745","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"15d369d7483263c9","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.003Z","caller":"embed/etcd.go:690","msg":"starting with client TLS","tls-info":"cert = /var/lib/microshift/certs/etcd-signer/etcd-serving/peer.crt, key = /var/lib/microshift/certs/etcd-signer/etcd-serving/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/microshift/certs/etcd-signer/ca.crt, client-cert-auth = false, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"]} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.003Z","caller":"embed/etcd.go:583","msg":"serving peer 
traffic","address":"172.27.117.179:2380"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.003Z","caller":"embed/etcd.go:555","msg":"cmux::serve","address":"172.27.117.179:2380"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.003Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"15d369d7483263c9","initial-advertise-peer-urls":["https://172.27.117.179:2380"],"listen-peer-urls":["https://172.27.117.179:2380"],"advertise-client-urls":["https://172.27.117.179:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.117.179:2379"],"listen-metrics-urls":["https://127.0.0.1:2381"]} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.003Z","caller":"embed/etcd.go:765","msg":"serving metrics","address":"https://127.0.0.1:2381"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 is starting a new election at term 5"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 became pre-candidate at term 5"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 received MsgPreVoteResp from 15d369d7483263c9 at term 5"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 became candidate at term 6"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 received MsgVoteResp from 15d369d7483263c9 at term 6"} Jan 17 14:38:20 edgenius microshift[2779]: 
{"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"15d369d7483263c9 became leader at term 6"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 15d369d7483263c9 elected leader 15d369d7483263c9 at term 6"} Jan 17 14:38:20 edgenius microshift[2779]: etcd I0117 14:38:20.790930 2779 etcd.go:103] etcd is ready Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.790Z","caller":"etcdserver/server.go:2051","msg":"published local member to cluster through raft","local-member-id":"15d369d7483263c9","local-member-attributes":"{Name:edgenius ClientURLs:[https://172.27.117.179:2379]}","request-path":"/0/members/15d369d7483263c9/attributes","cluster-id":"55c2df2a192daa3c","publish-timeout":"7s"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.790Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} Jan 17 14:38:20 edgenius microshift[2779]: kube-apiserver I0117 14:38:20.791002 2779 manager.go:114] Starting kube-apiserver Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.791Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"172.27.117.179:2379"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.790Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} Jan 17 14:38:20 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:38:20.793Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} Jan 17 14:38:20 edgenius microshift[2779]: kube-apiserver I0117 14:38:20.793944 2779 kube-apiserver.go:321] "kube-apiserver" not yet ready: Get "https://127.0.0.1:6443/readyz": dial tcp 127.0.0.1:6443: connect: connection refused Jan 17 14:38:20 edgenius microshift[2779]: Flag 
--openshift-config has been deprecated, to be removed Jan 17 14:38:20 edgenius microshift[2779]: Flag --openshift-config has been deprecated, to be removed Jan 17 14:38:20 edgenius microshift[2779]: Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19 Jan 17 14:38:20 edgenius microshift[2779]: Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed. Jan 17 14:38:20 edgenius microshift[2779]: kube-apiserver I0117 14:38:20.796766 2779 server.go:620] external host was not specified, using 172.27.117.179 Jan 17 14:38:20 edgenius microshift[2779]: kube-apiserver I0117 14:38:20.796987 2779 server.go:201] Version: v1.25.0 Jan 17 14:38:20 edgenius microshift[2779]: kube-apiserver I0117 14:38:20.797022 2779 server.go:203] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.163676 2779 shared_informer.go:255] Waiting for caches to sync for node_authorizer Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.165617 2779 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.165805 2779 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.165840 2779 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.165848 2779 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.165978 2779 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.167862 2779 plugins.go:158] Loaded 19 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.167889 2779 plugins.go:161] Loaded 26 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.178564 2779 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.178640 2779 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.178672 2779 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.178677 2779 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.178754 2779 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.179303 2779 plugins.go:158] Loaded 19 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.179318 2779 plugins.go:161] Loaded 26 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.197288 2779 genericapiserver.go:690] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.198026 2779 instance.go:261] Using reconciler: lease Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.334291 2779 instance.go:575] API group "internal.apiserver.k8s.io" is not enabled, skipping. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.568960 2779 genericapiserver.go:690] Skipping API authentication.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.572703 2779 genericapiserver.go:690] Skipping API authorization.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.577103 2779 genericapiserver.go:690] Skipping API autoscaling/v2beta1 because it has no resources. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.581519 2779 genericapiserver.go:690] Skipping API batch/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.583303 2779 genericapiserver.go:690] Skipping API certificates.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.584460 2779 genericapiserver.go:690] Skipping API coordination.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.584511 2779 genericapiserver.go:690] Skipping API discovery.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.587200 2779 genericapiserver.go:690] Skipping API networking.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.587229 2779 genericapiserver.go:690] Skipping API networking.k8s.io/v1alpha1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.588190 2779 genericapiserver.go:690] Skipping API node.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.588216 2779 genericapiserver.go:690] Skipping API node.k8s.io/v1alpha1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.588244 2779 genericapiserver.go:690] Skipping API policy/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.590872 2779 genericapiserver.go:690] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.590901 2779 genericapiserver.go:690] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.591853 2779 genericapiserver.go:690] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.591878 2779 genericapiserver.go:690] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.594872 2779 genericapiserver.go:690] Skipping API storage.k8s.io/v1alpha1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.597518 2779 genericapiserver.go:690] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.600231 2779 genericapiserver.go:690] Skipping API apps/v1beta2 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.600264 2779 genericapiserver.go:690] Skipping API apps/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.601447 2779 genericapiserver.go:690] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.602483 2779 genericapiserver.go:690] Skipping API events.k8s.io/v1beta1 because it has no resources. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.603058 2779 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603157 2779 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603184 2779 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603192 2779 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603258 2779 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603783 2779 plugins.go:158] Loaded 19 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. 
Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver I0117 14:38:21.603810 2779 plugins.go:161] Loaded 26 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota. Jan 17 14:38:21 edgenius microshift[2779]: kube-apiserver W0117 14:38:21.617611 2779 genericapiserver.go:690] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.523030 2779 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt" Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.523048 2779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt" Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.523051 2779 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.523660 2779 secure_serving.go:210] Serving securely on [::]:6443 Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.523748 2779 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key" Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542149 2779 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542192 2779 controller.go:80] Starting OpenAPI V3 AggregationController Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542213 2779 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key" 
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542477 2779 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542492 2779 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542495 2779 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542547 2779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt"
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542639 2779 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542658 2779 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542678 2779 available_controller.go:513] Starting AvailableConditionController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542682 2779 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542909 2779 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt"
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.544239 2779 apf_controller.go:300] Starting API Priority and Fairness config controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.544319 2779 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.crt::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.key"
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.544677 2779 customresource_discovery_controller.go:209] Starting DiscoveryController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.545392 2779 autoregister_controller.go:141] Starting autoregister controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.545447 2779 cache.go:32] Waiting for caches to sync for autoregister controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.545508 2779 controller.go:83] Starting OpenAPI AggregationController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.542146 2779 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key"
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.545993 2779 controller.go:85] Starting OpenAPI controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546042 2779 controller.go:85] Starting OpenAPI V3 controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546066 2779 naming_controller.go:291] Starting NamingConditionController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546079 2779 establishing_controller.go:76] Starting EstablishingController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546089 2779 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546106 2779 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546117 2779 crd_finalizer.go:266] Starting CRDFinalizer
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546563 2779 crdregistration_controller.go:112] Starting crd-autoregister controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.546574 2779 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.557547 2779 kube-apiserver.go:321] "kube-apiserver" not yet ready: unknown
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver W0117 14:38:22.565855 2779 sdn_readyz_wait.go:102] api.openshift-oauth-apiserver.svc endpoints were not found
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver W0117 14:38:22.565994 2779 sdn_readyz_wait.go:102] api.openshift-apiserver.svc endpoints were not found
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver E0117 14:38:22.566062 2779 sdn_readyz_wait.go:138] api-openshift-oauth-apiserver-available did not find an openshift-oauth-apiserver endpoint
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver E0117 14:38:22.566106 2779 sdn_readyz_wait.go:138] api-openshift-apiserver-available did not find an openshift-apiserver endpoint
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver E0117 14:38:22.566586 2779 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.643706 2779 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.643748 2779 cache.go:39] Caches are synced for AvailableConditionController controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.643762 2779 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.644359 2779 apf_controller.go:305] Running API Priority and Fairness config worker
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.645652 2779 cache.go:39] Caches are synced for autoregister controller
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.653516 2779 shared_informer.go:262] Caches are synced for crd-autoregister
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.677201 2779 shared_informer.go:262] Caches are synced for node_authorizer
Jan 17 14:38:22 edgenius microshift[2779]: kube-apiserver I0117 14:38:22.807922 2779 kube-apiserver.go:321] "kube-apiserver" not yet ready: an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]etcd-readiness ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]shutdown ok\nreadyz check failed") has prevented the request from succeeding
Jan 17 14:38:23 edgenius microshift[2779]: kube-apiserver I0117 14:38:23.351690 2779 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Jan 17 14:38:23 edgenius microshift[2779]: kube-apiserver I0117 14:38:23.545521 2779 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
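The long "not yet ready" record above embeds the verbose body of the apiserver's `/readyz` health endpoint, one check per `\n`-separated line: `[+]` marks a passing check, `[-]` a failing one (here the `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` post-start hooks, which resolve moments later). A small sketch for pulling out just the failed check names (the helper name is this editor's invention):

```python
def failing_checks(readyz_body: str) -> list[str]:
    """Return the names of checks marked failed ([-]) in a verbose /readyz body."""
    failed = []
    for line in readyz_body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            failed.append(line[3:].split(" ", 1)[0])
    return failed

sample = (
    "[+]ping ok\n"
    "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"
    "[+]etcd ok"
)
```

Applied to the record above, this would surface only the two `[-]` entries instead of the thirty-odd `[+]` lines.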
Jan 17 14:38:23 edgenius microshift[2779]: kube-apiserver I0117 14:38:23.795447 2779 kube-apiserver.go:335] "kube-apiserver" is ready
Jan 17 14:38:23 edgenius microshift[2779]: kube-scheduler I0117 14:38:23.795490 2779 manager.go:114] Starting kube-scheduler
Jan 17 14:38:23 edgenius microshift[2779]: kube-controller-manager I0117 14:38:23.795523 2779 manager.go:114] Starting kube-controller-manager
Jan 17 14:38:23 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:23.795532 2779 manager.go:114] Starting openshift-crd-manager
Jan 17 14:38:23 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:23.805147 2779 crd.go:155] Applying openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.079311 2779 serving.go:342] Generated self-signed cert (/var/run/kubernetes/kube-controller-manager.crt, /var/run/kubernetes/kube-controller-manager.key)
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.459742 2779 serving.go:348] Generated self-signed cert in-memory
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.552688 2779 controllermanager.go:189] Version: v1.25.0
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.552750 2779 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555647 2779 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555672 2779 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555651 2779 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555688 2779 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555660 2779 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.555703 2779 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.556034 2779 secure_serving.go:210] Serving securely on 127.0.0.1:10257
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.556353 2779 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/kubernetes/kube-controller-manager.crt::/var/run/kubernetes/kube-controller-manager.key"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.559097 2779 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.568653 2779 shared_informer.go:255] Waiting for caches to sync for tokens
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.571658 2779 controllermanager.go:649] Started "podgc"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.572027 2779 gc_controller.go:99] Starting GC controller
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.572074 2779 shared_informer.go:255] Waiting for caches to sync for GC
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.573793 2779 controllermanager.go:649] Started "daemonset"
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager W0117 14:38:24.573809 2779 controllermanager.go:614] "bootstrapsigner" is disabled
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.573830 2779 daemon_controller.go:297] Starting daemon sets controller
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.573841 2779 shared_informer.go:255] Waiting for caches to sync for daemon sets
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.575275 2779 node_ipam_controller.go:91] Sending events to api server.
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.656699 2779 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.656741 2779 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.656904 2779 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-controller-manager I0117 14:38:24.669385 2779 shared_informer.go:262] Caches are synced for tokens
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.680375 2779 server.go:152] "Starting Kubernetes Scheduler" version="v1.25.0"
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.680512 2779 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691360 2779 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691510 2779 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691399 2779 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691590 2779 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691435 2779 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691634 2779 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691643 2779 secure_serving.go:210] Serving securely on [::]:10259
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.691726 2779 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.791761 2779 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.791839 2779 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
Jan 17 14:38:24 edgenius microshift[2779]: kube-scheduler I0117 14:38:24.791799 2779 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 17 14:38:27 edgenius microshift[2779]: kube-apiserver W0117 14:38:27.556782 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:27 edgenius microshift[2779]: kube-apiserver E0117 14:38:27.556829 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:27 edgenius microshift[2779]: kube-apiserver W0117 14:38:27.558381 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:27 edgenius microshift[2779]: kube-apiserver E0117 14:38:27.558423 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:28 edgenius microshift[2779]: kube-apiserver W0117 14:38:28.447983 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:28 edgenius microshift[2779]: kube-apiserver E0117 14:38:28.448031 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:28 edgenius microshift[2779]: kube-scheduler I0117 14:38:28.798502 2779 kube-scheduler.go:89] kube-scheduler is ready
Jan 17 14:38:28 edgenius microshift[2779]: kube-controller-manager I0117 14:38:28.798545 2779 kube-controller-manager.go:126] kube-controller-manager is ready
Jan 17 14:38:28 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:28.808078 2779 crd.go:166] Applied openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml
Jan 17 14:38:28 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:28.808129 2779 crd.go:155] Applying openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml
Jan 17 14:38:29 edgenius microshift[2779]: kube-apiserver W0117 14:38:29.156924 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:29 edgenius microshift[2779]: kube-apiserver E0117 14:38:29.156965 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:31 edgenius microshift[2779]: kube-apiserver W0117 14:38:31.262395 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:31 edgenius microshift[2779]: kube-apiserver E0117 14:38:31.262461 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:32 edgenius microshift[2779]: kube-apiserver W0117 14:38:32.045679 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:32 edgenius microshift[2779]: kube-apiserver E0117 14:38:32.046169 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:33 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:33.817972 2779 crd.go:166] Applied openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml
Jan 17 14:38:33 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:33.818062 2779 crd.go:155] Applying openshift CRD crd/route.crd.yaml
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.592559 2779 range_allocator.go:76] Sending events to api server.
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.592800 2779 range_allocator.go:104] No Service CIDR provided. Skipping filtering out service addresses.
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.592809 2779 range_allocator.go:110] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
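The repeated reflector warnings above all name the missing resource in a `(get <resource>)` suffix (`groups.user.openshift.io`, `clusterresourcequotas.quota.openshift.io`): the informers retry until the corresponding CRDs are applied by openshift-crd-manager. When triaging logs like this, tallying the errors by target resource helps separate transient startup churn from a persistently missing API. A sketch of that tally, assuming only the `(get ...)` suffix format seen here:

```python
import re
from collections import Counter

# Matches the "(get <resource>)" suffix that reflector errors append.
GET_RE = re.compile(r"\(get ([\w.]+)\)")

def count_reflector_targets(lines):
    """Tally how often each resource appears in reflector list/watch errors."""
    counts = Counter()
    for line in lines:
        m = GET_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Abbreviated sample lines standing in for the journal records above.
logs = [
    "failed to list *v1.Group: not found (get groups.user.openshift.io)",
    "Failed to watch *v1.ClusterResourceQuota: (get clusterresourcequotas.quota.openshift.io)",
    "failed to list *v1.Group: not found (get groups.user.openshift.io)",
]
```

A resource whose count keeps climbing long after startup points at a CRD or API service that never registered; counts that stop growing, as in this log, were just startup ordering.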
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.592872 2779 controllermanager.go:649] Started "nodeipam"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.592999 2779 node_ipam_controller.go:154] Starting ipam controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.593011 2779 shared_informer.go:255] Waiting for caches to sync for node
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.595578 2779 controllermanager.go:649] Started "job"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.595836 2779 job_controller.go:196] Starting job controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.595878 2779 shared_informer.go:255] Waiting for caches to sync for job
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599251 2779 controllermanager.go:649] Started "csrsigning"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599587 2779 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599607 2779 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599647 2779 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599655 2779 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599705 2779 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599717 2779 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599820 2779 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599864 2779 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.599906 2779 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.600181 2779 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.600263 2779 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager E0117 14:38:34.604409 2779 core.go:218] failed to start cloud node lifecycle controller: no cloud provider provided
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.604546 2779 controllermanager.go:627] Skipping "cloud-node-lifecycle"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.607578 2779 controllermanager.go:649] Started "ephemeral-volume"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.608082 2779 controller.go:169] Starting ephemeral volume controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.608110 2779 shared_informer.go:255] Waiting for caches to sync for ephemeral
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.616503 2779 controllermanager.go:649] Started "endpoint"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.616811 2779 endpoints_controller.go:182] Starting endpoint controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.616870 2779 shared_informer.go:255] Waiting for caches to sync for endpoint
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.622751 2779 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.crt::/var/lib/microshift/certs/kubelet-csr-signer-signer/csr-signer/ca.key"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.623478 2779 controllermanager.go:649] Started "endpointslicemirroring"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.623674 2779 endpointslicemirroring_controller.go:216] Starting EndpointSliceMirroring controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.623725 2779 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701532 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for edgemetadata.edge.edgenius.abb
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701708 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701732 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701753 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701793 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701817 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for challenges.acme.cert-manager.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701864 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701884 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701904 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701917 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for certificaterequests.cert-manager.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701943 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.701962 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702079 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for issuers.cert-manager.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702138 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702167 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702264 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702299 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for orders.acme.cert-manager.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702320 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702558 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702616 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for certificates.cert-manager.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702646 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for routes.route.openshift.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.702704 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for edgemodules.edge.edgenius.abb
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705063 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705472 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705721 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705779 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705840 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.705956 2779 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.706052 2779 controllermanager.go:649] Started "resourcequota"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.706442 2779 resource_quota_controller.go:277] Starting resource quota controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.707432 2779 shared_informer.go:255] Waiting for caches to sync for resource quota
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.708360 2779 resource_quota_monitor.go:295] QuotaMonitor running
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.762444 2779 controllermanager.go:649] Started "serviceaccount"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.763386 2779 serviceaccounts_controller.go:117] Starting service account controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.763650 2779 shared_informer.go:255] Waiting for caches to sync for service account
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.767324 2779 controllermanager.go:649] Started "replicaset"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.767625 2779 replica_set.go:205] Starting replicaset controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.767665 2779 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager E0117 14:38:34.782059 2779 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.782540 2779 controllermanager.go:627] Skipping "service"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.786123 2779 controllermanager.go:649] Started "clusterrole-aggregation"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.786456 2779 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.788660 2779 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.793968 2779 controllermanager.go:649] Started "garbagecollector"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.794146 2779 garbagecollector.go:154] Starting garbage collector controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.794157 2779 shared_informer.go:255] Waiting for caches to sync for garbage
collector Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.795252 2779 graph_builder.go:291] GraphBuilder running Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.805827 2779 controllermanager.go:649] Started "disruption" Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.806310 2779 disruption.go:421] Sending events to api server. Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.806445 2779 disruption.go:432] Starting disruption controller Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.806502 2779 shared_informer.go:255] Waiting for caches to sync for disruption Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.810992 2779 controllermanager.go:649] Started "cronjob" Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.811347 2779 cronjob_controllerv2.go:135] "Starting cronjob controller v2" Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.811451 2779 shared_informer.go:255] Waiting for caches to sync for cronjob Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.814152 2779 controllermanager.go:649] Started "csrcleaner" Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.814446 2779 cleaner.go:82] Starting CSR cleaner controller Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.821274 2779 node_lifecycle_controller.go:497] Controller will reconcile labels. Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.821949 2779 controllermanager.go:649] Started "nodelifecycle" Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.822359 2779 node_lifecycle_controller.go:532] Sending events to api server. 
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.822496 2779 node_lifecycle_controller.go:543] Starting node controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.822618 2779 shared_informer.go:255] Waiting for caches to sync for taint
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.855757 2779 controllermanager.go:649] Started "endpointslice"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.857663 2779 endpointslice_controller.go:261] Starting endpoint slice controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.857943 2779 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.915758 2779 controllermanager.go:649] Started "namespace"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.918114 2779 namespace_controller.go:200] Starting namespace controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.918216 2779 shared_informer.go:255] Waiting for caches to sync for namespace
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.920074 2779 controllermanager.go:649] Started "statefulset"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.920340 2779 stateful_set.go:152] Starting stateful set controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.920364 2779 shared_informer.go:255] Waiting for caches to sync for stateful set
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.922835 2779 controllermanager.go:649] Started "csrapproving"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.922981 2779 core.go:228] Will not configure cloud provider routes for allocate-node-cidrs: true, configure-cloud-routes: false.
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.923025 2779 controllermanager.go:627] Skipping "route"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.923161 2779 certificate_controller.go:112] Starting certificate controller "csrapproving"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.923217 2779 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.930933 2779 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager E0117 14:38:34.931116 2779 plugins.go:616] "Error initializing dynamic plugin prober" err="error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.933293 2779 controllermanager.go:649] Started "attachdetach"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.939276 2779 controllermanager.go:649] Started "persistentvolume-expander"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.939464 2779 attach_detach_controller.go:328] Starting attach detach controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.939916 2779 shared_informer.go:255] Waiting for caches to sync for attach detach
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.939400 2779 expand_controller.go:340] Starting expand controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.939978 2779 shared_informer.go:255] Waiting for caches to sync for expand
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.944598 2779 controllermanager.go:649] Started "replicationcontroller"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.944869 2779 replica_set.go:205] Starting replicationcontroller controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.944883 2779 shared_informer.go:255] Waiting for caches to sync for ReplicationController
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.950279 2779 controllermanager.go:649] Started "deployment"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.950324 2779 controllermanager.go:614] "ttl" is disabled
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.950335 2779 controllermanager.go:614] "tokencleaner" is disabled
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.950589 2779 deployment_controller.go:160] "Starting controller" controller="deployment"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.950607 2779 shared_informer.go:255] Waiting for caches to sync for deployment
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.954468 2779 controllermanager.go:649] Started "persistentvolume-binder"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.954756 2779 pv_controller_base.go:335] Starting persistent volume controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.954788 2779 shared_informer.go:255] Waiting for caches to sync for persistent volume
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.956767 2779 controllermanager.go:649] Started "pv-protection"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.956871 2779 pv_protection_controller.go:79] Starting PV protection controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.956879 2779 shared_informer.go:255] Waiting for caches to sync for PV protection
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.958590 2779 controllermanager.go:649] Started "service-ca-cert-publisher"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.958719 2779 publisher.go:86] Starting service CA certificate configmap publisher
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.958733 2779 shared_informer.go:255] Waiting for caches to sync for crt configmap
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.981281 2779 controllermanager.go:649] Started "horizontalpodautoscaling"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.981446 2779 horizontal.go:168] Starting HPA controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.981464 2779 shared_informer.go:255] Waiting for caches to sync for HPA
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.984501 2779 controllermanager.go:649] Started "pvc-protection"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.984701 2779 pvc_protection_controller.go:103] "Starting PVC protection controller"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.984713 2779 shared_informer.go:255] Waiting for caches to sync for PVC protection
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.986772 2779 controllermanager.go:649] Started "ttl-after-finished"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.986808 2779 ttlafterfinished_controller.go:109] Starting TTL after finished controller
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.986823 2779 shared_informer.go:255] Waiting for caches to sync for TTL after finished
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.989149 2779 controllermanager.go:649] Started "root-ca-cert-publisher"
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.989287 2779 publisher.go:107] Starting root CA certificate configmap publisher
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.989315 2779 shared_informer.go:255] Waiting for caches to sync for crt configmap
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager I0117 14:38:34.991781 2779 shared_informer.go:255] Waiting for caches to sync for resource quota
Jan 17 14:38:34 edgenius microshift[2779]: kube-controller-manager W0117 14:38:34.998922 2779 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="edgenius" does not exist
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.000963 2779 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.001026 2779 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.001049 2779 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.001067 2779 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.011707 2779 shared_informer.go:262] Caches are synced for cronjob
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.018264 2779 shared_informer.go:262] Caches are synced for endpoint
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.018336 2779 shared_informer.go:262] Caches are synced for namespace
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.025783 2779 shared_informer.go:262] Caches are synced for certificate-csrapproving
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.042044 2779 shared_informer.go:262] Caches are synced for expand
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.042290 2779 shared_informer.go:262] Caches are synced for attach detach
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager E0117 14:38:35.042470 2779 attach_detach_controller.go:440] Error creating spec for volume "pvc-volume-edgefileprocessor", pod "edgenius"/"edgefileprocessor-6b88cf4fb5-hnt7t": error processing PVC "edgenius"/"pvc-volume-edgefileprocessor": PVC edgenius/pvc-volume-edgefileprocessor has non-bound phase ("Pending") or empty pvc.Spec.VolumeName ("")
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.043415 2779 shared_informer.go:255] Waiting for caches to sync for garbage collector
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.047126 2779 shared_informer.go:262] Caches are synced for ReplicationController
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.057985 2779 shared_informer.go:262] Caches are synced for endpoint_slice
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.059216 2779 shared_informer.go:262] Caches are synced for deployment
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.061440 2779 shared_informer.go:262] Caches are synced for crt configmap
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.063729 2779 shared_informer.go:262] Caches are synced for service account
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.070755 2779 shared_informer.go:262] Caches are synced for ReplicaSet
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.072130 2779 shared_informer.go:262] Caches are synced for GC
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.072529 2779 shared_informer.go:262] Caches are synced for persistent volume
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.080545 2779 shared_informer.go:262] Caches are synced for PV protection
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager E0117 14:38:35.087348 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.088275 2779 shared_informer.go:262] Caches are synced for TTL after finished
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.088387 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.089031 2779 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.091316 2779 shared_informer.go:262] Caches are synced for daemon sets
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.091338 2779 shared_informer.go:255] Waiting for caches to sync for daemon sets
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.091346 2779 shared_informer.go:262] Caches are synced for daemon sets
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.103233 2779 shared_informer.go:262] Caches are synced for node
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.103360 2779 range_allocator.go:166] Starting range CIDR allocator
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.103368 2779 shared_informer.go:255] Waiting for caches to sync for cidrallocator
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.103382 2779 shared_informer.go:262] Caches are synced for cidrallocator
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.112218 2779 shared_informer.go:262] Caches are synced for ephemeral
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.131947 2779 shared_informer.go:262] Caches are synced for taint
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.132041 2779 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager W0117 14:38:35.132132 2779 node_lifecycle_controller.go:1058] Missing timestamp for Node edgenius. Assuming now as a timestamp.
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.132170 2779 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.145678 2779 taint_manager.go:204] "Starting NoExecuteTaintManager"
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.145741 2779 taint_manager.go:209] "Sending events to api server"
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.147759 2779 event.go:294] "Event occurred" object="edgenius" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node edgenius event: Registered Node edgenius in Controller"
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257047 2779 shared_informer.go:262] Caches are synced for HPA
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257106 2779 shared_informer.go:262] Caches are synced for PVC protection
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257168 2779 shared_informer.go:262] Caches are synced for crt configmap
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257350 2779 shared_informer.go:262] Caches are synced for job
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257395 2779 shared_informer.go:262] Caches are synced for disruption
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257446 2779 shared_informer.go:262] Caches are synced for stateful set
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.257488 2779 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.692046 2779 shared_informer.go:262] Caches are synced for resource quota
Jan 17 14:38:35 edgenius microshift[2779]: kube-apiserver W0117 14:38:35.703031 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:35 edgenius microshift[2779]: kube-apiserver E0117 14:38:35.703418 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.709699 2779 shared_informer.go:262] Caches are synced for resource quota
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.843897 2779 shared_informer.go:262] Caches are synced for garbage collector
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.895476 2779 shared_informer.go:262] Caches are synced for garbage collector
Jan 17 14:38:35 edgenius microshift[2779]: kube-controller-manager I0117 14:38:35.895641 2779 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Jan 17 14:38:36 edgenius microshift[2779]: kube-apiserver W0117 14:38:36.403581 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:36 edgenius microshift[2779]: kube-apiserver E0117 14:38:36.403636 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:38 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:38.839387 2779 crd.go:166] Applied openshift CRD crd/route.crd.yaml
Jan 17 14:38:38 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:38.839530 2779 crd.go:155] Applying openshift CRD components/odf-lvm/topolvm.io_logicalvolumes.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.842909 2779 crd.go:166] Applied openshift CRD components/odf-lvm/topolvm.io_logicalvolumes.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.842964 2779 openshift-crd-manager.go:46] openshift-crd-manager applied default CRDs
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.842971 2779 openshift-crd-manager.go:48] openshift-crd-manager waiting for CRDs acceptance before proceeding
Jan 17 14:38:43 edgenius microshift[2779]: kube-controller-manager I0117 14:38:43.842988 2779 core.go:170] Applying corev1 api controllers/kube-controller-manager/namespace-openshift-kube-controller-manager.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.843473 2779 crd.go:81] Waiting for crd crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml condition.type: established
Jan 17 14:38:43 edgenius microshift[2779]: kube-controller-manager I0117 14:38:43.844607 2779 core.go:170] Applying corev1 api core/namespace-openshift-infra.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.845069 2779 crd.go:81] Waiting for crd crd/0000_03_security-openshift_01_scc.crd.yaml condition.type: established
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.848641 2779 crd.go:81] Waiting for crd crd/route.crd.yaml condition.type: established
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.852830 2779 crd.go:81] Waiting for crd components/odf-lvm/topolvm.io_logicalvolumes.yaml condition.type: established
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.855104 2779 openshift-crd-manager.go:52] openshift-crd-manager all CRDs are ready
Jan 17 14:38:43 edgenius microshift[2779]: openshift-crd-manager I0117 14:38:43.855148 2779 manager.go:119] openshift-crd-manager completed
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.855242 2779 manager.go:114] Starting cluster-policy-controller
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.855248 2779 manager.go:114] Starting route-controller-manager
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.855254 2779 manager.go:114] Starting openshift-default-scc-manager
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.855934 2779 core.go:170] Applying corev1 api controllers/route-controller-manager/0000_50_cluster-openshift-route-controller-manager_00_namespace.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.857571 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-anyuid.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.859680 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.861609 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.864033 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.866333 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.867510 2779 policy_controller.go:88] Started "openshift.io/namespace-security-allocation"
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller W0117 14:38:43.867613 2779 policy_controller.go:74] "openshift.io/resourcequota" is disabled
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller W0117 14:38:43.867653 2779 policy_controller.go:74] "openshift.io/cluster-quota-reconciliation" is disabled
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.867526 2779 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.867547 2779 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.868285 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.869068 2779 policy_controller.go:88] Started "openshift.io/cluster-csr-approver"
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.869163 2779 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.873694 2779 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer"
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.873706 2779 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.873719 2779 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.873718 2779 policy_controller.go:91] Started Origin Controllers
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.873708 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-nonroot.yaml
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877741 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877782 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:cloud-provider" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877798 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:token-cleaner" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877809 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-route-controller-manager" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877823 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877831 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "edgenius-orchestrator-leader-election-role" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877843 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "fileProcessorRole" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877851 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877863 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "topolvm-csi-resizer" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877869 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cert-manager:leaderelection" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877881 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877889 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "topolvm-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877902 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cert-manager-webhook:dynamic-serving" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877909 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cert-manager-cainjector:leaderelection" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877922 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref:
role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877930 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "topolvm-csi-provisioner" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877943 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877951 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.877964 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-route-controller-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879364 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "default" should be enqueued: namespace "default" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879391 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879405 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: 
cluster-policy-controller E0117 14:38:43.879411 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879424 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879446 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "edgenius" should be enqueued: namespace "edgenius" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879460 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879467 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879478 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879484 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879496 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "cert-manager" should be enqueued: namespace "cert-manager" not found Jan 17 14:38:43 
edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879502 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "cert-manager" should be enqueued: namespace "cert-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879513 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879519 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879531 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879539 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879552 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879558 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879570 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found Jan 17 14:38:43 edgenius 
microshift[2779]: cluster-policy-controller E0117 14:38:43.879576 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879587 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879593 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879604 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "edgenius" should be enqueued: namespace "edgenius" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879611 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879622 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879628 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879639 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 
14:38:43.879646 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879657 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879663 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879674 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879679 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879689 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879695 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879705 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found Jan 17 14:38:43 edgenius microshift[2779]: 
cluster-policy-controller E0117 14:38:43.879712 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879721 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879727 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879739 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-storage" should be enqueued: namespace "openshift-storage" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879745 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879755 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879760 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879771 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 
edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879776 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879788 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879794 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879805 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879818 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "cert-manager" should be enqueued: namespace "cert-manager" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879828 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879835 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879845 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "edgenius" should be 
enqueued: namespace "edgenius" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879851 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879863 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879869 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879880 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879886 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879897 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879904 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "edgenius" should be enqueued: namespace "edgenius" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879913 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found Jan 
17 14:38:43 edgenius microshift[2779]: cluster-policy-controller E0117 14:38:43.879922 2779 podsecurity_label_sync_controller.go:268] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.881241 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-privileged.yaml Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881405 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881440 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:statefulset-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881459 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:ingress-to-route-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881468 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replication-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881481 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "topolvm-csi-provisioner" not found Jan 17 14:38:43 
edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881489 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-approve:cert-manager-io" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881500 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-orders" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881510 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "edgenius-orchestrator-proxy-role" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881521 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pv-protection-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881529 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ephemeral-volume-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881543 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:horizontal-pod-autoscaler" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881551 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from 
role ref: clusterrole.rbac.authorization.k8s.io "system:controller:persistent-volume-binder" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881563 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-account-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881572 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-webhook:subjectaccessreviews" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881584 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:clusterrole-aggregation-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881591 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881604 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:deployment-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881611 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 
14:38:43.881622 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "topolvm-node" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881628 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881641 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:node-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881647 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:discovery" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881658 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881666 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:podsecurity-admission-label-syncer-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881679 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-challenges" not found Jan 17 14:38:43 edgenius microshift[2779]: 
cluster-policy-controller I0117 14:38:43.881690 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:job-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881703 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:root-ca-cert-publisher" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881709 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:monitoring" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881721 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-ca-cert-publisher" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881729 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881741 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-csr-approver-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881751 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881765 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-clusterissuers" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881772 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslice-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881785 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pvc-protection-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881819 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:resourcequota-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881844 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:anyuid" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881878 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpoint-controller" not found Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881892 2779 
sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881898 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881906 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-ingress-shim" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881911 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:attachdetach-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881919 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:daemon-set-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881936 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-route-controller-manager" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881945 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "topolvm-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881950 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-cainjector" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.881968 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:generic-garbage-collector" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882003 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882016 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-route-controller-manager" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882021 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-after-finished-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882029 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882036 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-certificatesigningrequests" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882044 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:namespace-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882049 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pod-garbage-collector" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882056 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882062 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-certificates" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882069 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882076 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882083 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882101 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "topolvm-node-scc" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882109 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882130 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882142 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882149 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:expand-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882156 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:route-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882161 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-dns" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882168 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-router" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882173 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:certificate-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882181 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:disruption-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882185 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "topolvm-csi-resizer" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882192 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:namespace-security-allocation-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882197 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cert-manager-controller-issuers" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882202 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "edgenius-orchestrator-manager-role" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882209 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslicemirroring-controller" not found
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.882214 2779 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.884696 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.886448 2779 scc.go:87] Applying scc api controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_scc-restricted.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.888632 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/ingress-to-route-controller-clusterrole.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.890250 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-informer-clusterrole.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.891566 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-tokenreview-clusterrole.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.893213 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.894558 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.895859 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.897471 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.898748 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.899982 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.901512 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.902887 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.904195 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.905576 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.906953 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.908278 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-informer-clusterrolebinding.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.909474 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-tokenreview-clusterrolebinding.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.910799 2779 rbac.go:144] Applying rbac controllers/openshift-default-scc-manager/0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.912005 2779 openshift-default-scc-manager.go:50] openshift-default-scc-manager applied default SCCs
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.912027 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-leader-role.yaml
Jan 17 14:38:43 edgenius microshift[2779]: openshift-default-scc-manager I0117 14:38:43.912034 2779 manager.go:119] openshift-default-scc-manager completed
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.912053 2779 manager.go:114] Starting microshift-mdns-controller
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.912973 2779 controller.go:67] mDNS: Starting server on interface "lo", NodeIP "172.27.117.179", NodeName "edgenius"
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.913426 2779 controller.go:67] mDNS: Starting server on interface "eth0", NodeIP "172.27.117.179", NodeName "edgenius"
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.913510 2779 controller.go:67] mDNS: Starting server on interface "br-ex", NodeIP "172.27.117.179", NodeName "edgenius"
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.913518 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-separate-sa-role.yaml
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.913700 2779 routes.go:30] Starting MicroShift mDNS route watcher
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.914220 2779 routes.go:73] mDNS: waiting for route API to be ready
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.915393 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-leader-rolebinding.yaml
Jan 17 14:38:43 edgenius microshift[2779]: microshift-mdns-controller I0117 14:38:43.916706 2779 routes.go:87] mDNS: Route API ready, watching routers
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.916714 2779 rbac.go:144] Applying rbac controllers/route-controller-manager/route-controller-separate-sa-rolebinding.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.918569 2779 core.go:170] Applying corev1 api controllers/route-controller-manager/route-controller-sa.yaml
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.920441 2779 controller_manager.go:26] Starting controllers on 0.0.0.0:8445 (unknown)
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.921043 2779 leaderelection.go:248] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers...
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.921129 2779 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8445
Jan 17 14:38:43 edgenius microshift[2779]: kube-apiserver I0117 14:38:43.928296 2779 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.929394 2779 leaderelection.go:258] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager W0117 14:38:43.929699 2779 route.go:78] "openshift.io/ingress-ip" is disabled
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.929791 2779 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"b19d30bb-354d-4424-a57a-2ae467f69d37", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"691335", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' edgenius became leader
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.931116 2779 ingress.go:262] ingress-to-route metrics registered with prometheus
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.931144 2779 route.go:91] Started "openshift.io/ingress-to-route"
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.931151 2779 route.go:93] Started Route Controllers
Jan 17 14:38:43 edgenius microshift[2779]: route-controller-manager I0117 14:38:43.931305 2779 ingress.go:313] Starting controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.967837 2779 base_controller.go:73] Caches are synced for namespace-security-allocation-controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.967883 2779 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ...
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.969218 2779 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.969249 2779 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ...
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.973974 2779 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller
Jan 17 14:38:43 edgenius microshift[2779]: cluster-policy-controller I0117 14:38:43.973991 2779 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ...
Jan 17 14:38:45 edgenius microshift[2779]: kube-apiserver W0117 14:38:45.705288 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:45 edgenius microshift[2779]: kube-apiserver E0117 14:38:45.705339 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:38:47 edgenius microshift[2779]: kube-apiserver W0117 14:38:47.038036 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:47 edgenius microshift[2779]: kube-apiserver E0117 14:38:47.038092 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:38:48 edgenius microshift[2779]: route-controller-manager I0117 14:38:48.856790 2779 openshift-route-controller-manager.go:107] route-controller-manager is ready
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.856906 2779 manager.go:114] Starting infrastructure-services-manager
Jan 17 14:38:48 edgenius microshift[2779]: kustomizer I0117 14:38:48.856957 2779 manager.go:114] Starting kustomizer
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.857020 2779 manager.go:114] Starting kubelet
Jan 17 14:38:48 edgenius microshift[2779]: kustomizer I0117 14:38:48.857032 2779 apply.go:64] No kustomization found at /usr/lib/microshift/manifests/kustomization.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kustomizer I0117 14:38:48.857044 2779 apply.go:64] No kustomization found at /etc/microshift/manifests/kustomization.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kustomizer I0117 14:38:48.857049 2779 manager.go:119] kustomizer completed
Jan 17 14:38:48 edgenius microshift[2779]: version-manager I0117 14:38:48.857054 2779 manager.go:114] Starting version-manager
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.862928 2779 rbac.go:144] Applying rbac controllers/kube-controller-manager/csr_approver_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: version-manager I0117 14:38:48.865021 2779 manager.go:119] version-manager completed
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.865020 2779 rbac.go:144] Applying rbac controllers/cluster-policy-controller/namespace-security-allocation-controller-clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.867684 2779 rbac.go:144] Applying rbac controllers/cluster-policy-controller/podsecurity-admission-label-syncer-controller-clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.870983 2779 rbac.go:144] Applying rbac controllers/kube-controller-manager/csr_approver_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.873057 2779 rbac.go:144] Applying rbac controllers/cluster-policy-controller/namespace-security-allocation-controller-clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.875182 2779 rbac.go:144] Applying rbac controllers/cluster-policy-controller/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.877893 2779 scheduling.go:77] Applying PriorityClass CR core/priority-class-openshift-user-critical.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.880429 2779 core.go:170] Applying corev1 api components/service-ca/ns.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.881332 2779 server.go:413] "Kubelet version" kubeletVersion="v1.25.0"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.881386 2779 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 14:38:48 edgenius microshift[2779]: kubelet W0117 14:38:48.881448 2779 feature_gate.go:238] Setting GA feature gate PodSecurity=true. It will be removed in a future release.
Jan 17 14:38:48 edgenius microshift[2779]: kubelet W0117 14:38:48.881516 2779 feature_gate.go:238] Setting GA feature gate PodSecurity=true. It will be removed in a future release.
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.882680 2779 rbac.go:144] Applying rbac components/service-ca/clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.883469 2779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/kubelet-ca.crt"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.885559 2779 rbac.go:144] Applying rbac components/service-ca/clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.888077 2779 rbac.go:144] Applying rbac components/service-ca/rolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.891008 2779 rbac.go:144] Applying rbac components/service-ca/role.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.893504 2779 core.go:170] Applying corev1 api components/service-ca/sa.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.893609 2779 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.894866 2779 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.894950 2779 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.894970 2779 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.894981 2779 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.895132 2779 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.902601 2779 apps.go:94] Applying apps api components/service-ca/deployment.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kube-apiserver I0117 14:38:48.911208 2779 controller.go:616] quota admission added evaluator for: deployments.apps
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.913039 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81d0d291ca dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/service-ca -n openshift-service-ca because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:48.912974282 +0000 UTC m=+30.118566829,LastTimestamp:2023-01-17 14:38:48.912974282 +0000 UTC m=+30.118566829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.914170 2779 storage.go:69] Applying sc components/odf-lvm/topolvm_default-storage-class.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.916897 2779 storage.go:126] Applying csiDriver components/odf-lvm/csi-driver.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.919482 2779 core.go:170] Applying corev1 api components/odf-lvm/topolvm-openshift-storage_namespace.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.920951 2779 kubelet.go:393] "Attempting to sync node with API server"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.920991 2779 kubelet.go:293] "Adding apiserver pod source"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.921009 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.921861 2779 core.go:170] Applying corev1 api components/odf-lvm/topolvm-node_v1_serviceaccount.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.922172 2779 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="cri-o" version="1.25.1-5.rhaos4.12.git6005903.el8" apiVersion="v1"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.923669 2779 server.go:1175] "Started kubelet"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet E0117 14:38:48.925903 2779 kubelet.go:1333] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.926444 2779 core.go:170] Applying corev1 api components/odf-lvm/topolvm-controller_v1_serviceaccount.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.926772 2779 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.927257 2779 server.go:438] "Adding debug handlers to kubelet server"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.929039 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.931477 2779 volume_manager.go:293] "Starting Kubelet Volume Manager"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.931542 2779 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.933615 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_role.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.935991 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_role.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.941273 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_role.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.946796 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_rolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.948204 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_rolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.954522 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_rolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.958299 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.960323 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.962542 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.964379 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.965887 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node_rbac.authorization.k8s.io_v1_clusterrole.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.967955 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.969329 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.971097 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.977082 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.980023 2779 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.984311 2779 core.go:170] Applying corev1 api components/odf-lvm/topolvm-lvmd-config_configmap_v1.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.984785 2779 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.984806 2779 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.984833 2779 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.985209 2779 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.985225 2779 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.985236 2779 policy_none.go:49] "None policy: Start"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.986471 2779 apps.go:94] Applying apps api components/odf-lvm/topolvm-controller_deployment.yaml
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.987163 2779 memory_manager.go:168] "Starting memorymanager" policy="None"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.987184 2779 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 14:38:48 edgenius microshift[2779]: kubelet I0117 14:38:48.988420 2779 state_mem.go:75] "Updated machine memory state"
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.993889 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81d5a49cfc dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/topolvm-controller -n openshift-storage because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:48.993848572 +0000 UTC m=+30.199441019,LastTimestamp:2023-01-17 14:38:48.993848572 +0000 UTC m=+30.199441019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:48 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:48.994784 2779 apps.go:94] Applying apps api components/odf-lvm/topolvm-node_daemonset.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kube-apiserver I0117 14:38:49.003695 2779 controller.go:616] quota admission added evaluator for: daemonsets.apps
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.005691 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81d658a700 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/topolvm-node -n openshift-storage because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.005647616 +0000 UTC m=+30.211240063,LastTimestamp:2023-01-17 14:38:49.005647616 +0000 UTC m=+30.211240063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.006401 2779 scc.go:87] Applying scc api components/odf-lvm/topolvm-node-securitycontextconstraint.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.008675 2779 core.go:170] Applying corev1 api components/openshift-router/namespace.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.010837 2779 rbac.go:144] Applying rbac components/openshift-router/cluster-role.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.012726 2779 rbac.go:144] Applying rbac components/openshift-router/cluster-role-binding.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.014703 2779 core.go:170] Applying corev1 api components/openshift-router/service-account.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.017110 2779 core.go:170] Applying corev1 api components/openshift-router/configmap.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.021994 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81d75173f7 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapUpdated,Message:Updated ConfigMap/service-ca-bundle -n openshift-ingress:
Jan 17 14:38:49 edgenius microshift[2779]: cause by changes in data.service-ca.crt,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.021953015 +0000 UTC m=+30.227545562,LastTimestamp:2023-01-17 14:38:49.021953015 +0000 UTC m=+30.227545562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.022569 2779 core.go:170] Applying corev1 api components/openshift-router/service-internal.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.027308 2779 apps.go:94] Applying apps api components/openshift-router/deployment.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.031519 2779 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.032157 2779 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.032562 2779 kubelet_node_status.go:72] "Attempting to register node" node="edgenius"
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.037596 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81d83f3e26 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/router-default -n openshift-ingress because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.037536806 +0000 UTC m=+30.243129253,LastTimestamp:2023-01-17 14:38:49.037536806 +0000 UTC m=+30.243129253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.038351 2779 core.go:170] Applying corev1 api components/openshift-dns/dns/namespace.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.042956 2779 kubelet_node_status.go:110] "Node was previously registered" node="edgenius"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.043162 2779 kubelet_node_status.go:75] "Successfully registered node" node="edgenius"
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.044233 2779 core.go:170] Applying corev1 api components/openshift-dns/dns/service.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.044594 2779 setters.go:545] "Node became not ready" node="edgenius" condition={Type:Ready Status:False LastHeartbeatTime:2023-01-17 14:38:49.044561792 +0000 UTC m=+30.250154239 LastTransitionTime:2023-01-17 14:38:49.044561792 +0000 UTC m=+30.250154239 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.046403 2779 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.047719 2779 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.048813 2779 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role-binding.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.051243 2779 core.go:170] Applying corev1 api components/openshift-dns/dns/service-account.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.055137 2779 core.go:170] Applying corev1 api components/openshift-dns/node-resolver/service-account.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.056860 2779 core.go:170] Applying corev1 api components/openshift-dns/dns/configmap.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.062480 2779 apps.go:94] Applying apps api components/openshift-dns/dns/daemonset.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.070337 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81da332333 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/dns-default -n openshift-dns because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.070297907 +0000 UTC m=+30.275890454,LastTimestamp:2023-01-17 14:38:49.070297907 +0000 UTC m=+30.275890454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.070470 2779 apps.go:94] Applying apps api components/openshift-dns/node-resolver/daemonset.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.081549 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81daddd8e8 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/node-resolver -n openshift-dns because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.081485544 +0000 UTC m=+30.287077991,LastTimestamp:2023-01-17 14:38:49.081485544 +0000 UTC m=+30.287077991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.081595 2779 ovn.go:67] OVNKubernetes config file not found, assuming default values
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.082152 2779 core.go:170] Applying corev1 api components/ovn/namespace.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.084889 2779 core.go:170] Applying corev1 api components/ovn/node/serviceaccount.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.086908 2779 core.go:170] Applying corev1 api components/ovn/master/serviceaccount.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.089309 2779 rbac.go:144] Applying rbac components/ovn/role.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.091938 2779 rbac.go:144] Applying rbac components/ovn/rolebinding.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.094575 2779 rbac.go:144] Applying rbac components/ovn/clusterrole.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.097357 2779 rbac.go:144] Applying rbac components/ovn/clusterrolebinding.yaml
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.098557 2779 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.098601 2779 status_manager.go:161] "Starting to sync pod status with apiserver"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.098619 2779 kubelet.go:2033] "Starting kubelet main sync loop"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet E0117 14:38:49.098719 2779 kubelet.go:2057] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.100431 2779 core.go:170] Applying corev1 api components/ovn/configmap.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.102984 2779 apps.go:94] Applying apps api components/ovn/master/daemonset.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.113539 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81dcc6540b dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/ovnkube-master -n openshift-ovn-kubernetes because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.113498635 +0000 UTC m=+30.319091082,LastTimestamp:2023-01-17 14:38:49.113498635 +0000 UTC m=+30.319091082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.113570 2779 apps.go:94] Applying apps api components/ovn/node/daemonset.yaml
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.120463 2779 recorder_logging.go:44] &Event{ObjectMeta:{dummy.173b1f81dd2ff370 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/ovnkube-node -n openshift-ovn-kubernetes because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2023-01-17 14:38:49.12042072 +0000 UTC m=+30.326013167,LastTimestamp:2023-01-17 14:38:49.12042072 +0000 UTC m=+30.326013167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.120501 2779 infra-services-controller.go:61] infrastructure-services-manager launched ocp componets
Jan 17 14:38:49 edgenius microshift[2779]: infrastructure-services-manager I0117 14:38:49.120526 2779 manager.go:119] infrastructure-services-manager completed
Jan 17 14:38:49 edgenius microshift[2779]: kubelet E0117 14:38:49.199688 2779 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet E0117 14:38:49.400160 2779 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet E0117 14:38:49.801326 2779 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 14:38:49 edgenius microshift[2779]: kubelet I0117 14:38:49.922017 2779 apiserver.go:52] "Watching apiserver"
Jan 17 14:38:50 edgenius microshift[2779]: kube-controller-manager E0117 14:38:50.081682 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:38:50 edgenius microshift[2779]: kube-controller-manager I0117 14:38:50.081882 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:38:50 edgenius microshift[2779]: kube-controller-manager I0117 14:38:50.134810 2779 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
Jan 17 14:38:50 edgenius microshift[2779]: kubelet E0117 14:38:50.601568 2779 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 14:38:52 edgenius microshift[2779]: kubelet E0117 14:38:52.202912 2779 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 14:38:53 edgenius microshift[2779]: kubelet I0117 14:38:53.864028 2779 kubelet.go:153] kubelet is ready
Jan 17 14:38:53 edgenius microshift[2779]: ??? I0117 14:38:53.864072 2779 run.go:140] MicroShift is ready
Jan 17 14:38:53 edgenius microshift[2779]: ??? I0117 14:38:53.864688 2779 run.go:145] sent sd_notify readiness message
Jan 17 14:38:53 edgenius systemd[1]: Started MicroShift.
Jan 17 14:38:53 edgenius microshift[2779]: kubelet I0117 14:38:53.885888 2779 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 14:38:53 edgenius microshift[2779]: kubelet I0117 14:38:53.886422 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 14:38:55 edgenius microshift[2779]: kube-controller-manager I0117 14:38:55.135912 2779 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.404360 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.405133 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.405986 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.406349 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.406578 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.406666 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.406754 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.406894 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407033 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407196 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407258 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407428 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407592 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407785 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.407903 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408015 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408236 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408429 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408557 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408639 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408737 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408834 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408915 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.408995 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409078 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409177 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409284 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409406 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409500 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409586 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409672 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409760 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.409863 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.410149 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.410270 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.569176 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63af6fe5-f768-4f46-b1c2-c25945b2d5a4-metrics-tls\") pod \"dns-default-jthmx\" (UID: \"63af6fe5-f768-4f46-b1c2-c25945b2d5a4\") " pod="openshift-dns/dns-default-jthmx"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.569427 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdb58\" (UniqueName: \"kubernetes.io/projected/63af6fe5-f768-4f46-b1c2-c25945b2d5a4-kube-api-access-zdb58\") pod \"dns-default-jthmx\" (UID: \"63af6fe5-f768-4f46-b1c2-c25945b2d5a4\") " pod="openshift-dns/dns-default-jthmx"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.569743 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-76f00de9-a1b9-494b-ab95-1f55febf7413\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92\") pod \"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") " pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.569823 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkt8\" (UniqueName: \"kubernetes.io/projected/5ba3e342-4e77-4728-8617-6cb001d446b0-kube-api-access-mwkt8\") pod \"edgeauthzpolicystore-5889cf9977-sljcq\" (UID: \"5ba3e342-4e77-4728-8617-6cb001d446b0\") " pod="edgenius/edgeauthzpolicystore-5889cf9977-sljcq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.569886 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/empty-dir/59eb2370-7941-4982-adde-19ea693c7bf0-certs\") pod \"topolvm-controller-5fc9996875-4hkzw\" (UID: \"59eb2370-7941-4982-adde-19ea693c7bf0\") " pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570033 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjjh\" (UniqueName: \"kubernetes.io/projected/59eb2370-7941-4982-adde-19ea693c7bf0-kube-api-access-smjjh\") pod \"topolvm-controller-5fc9996875-4hkzw\" (UID: \"59eb2370-7941-4982-adde-19ea693c7bf0\") " pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570119 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-csi-plugin-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570158 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config-dir\" (UniqueName: \"kubernetes.io/configmap/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-lvmd-config-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570193 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bw6m\" (UniqueName: \"kubernetes.io/projected/a7b376b4-0e44-4940-9bcc-8c9ad42b02d9-kube-api-access-8bw6m\") pod \"edgealarmsubscription-f49965fd-tdd6x\" (UID: \"a7b376b4-0e44-4940-9bcc-8c9ad42b02d9\") " pod="edgenius/edgealarmsubscription-f49965fd-tdd6x"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570236 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/59eb2370-7941-4982-adde-19ea693c7bf0-socket-dir\") pod \"topolvm-controller-5fc9996875-4hkzw\" (UID: \"59eb2370-7941-4982-adde-19ea693c7bf0\") " pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570263 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-pod-volumes-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570319 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-lvmd-socket-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570382 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a\") pod \"edgeauthzpolicystore-5889cf9977-sljcq\" (UID: \"5ba3e342-4e77-4728-8617-6cb001d446b0\") " pod="edgenius/edgeauthzpolicystore-5889cf9977-sljcq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570417 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgealarmsubscription-secret\" (UniqueName: \"kubernetes.io/secret/a7b376b4-0e44-4940-9bcc-8c9ad42b02d9-edgealarmsubscription-secret\") pod \"edgealarmsubscription-f49965fd-tdd6x\" (UID: \"a7b376b4-0e44-4940-9bcc-8c9ad42b02d9\") " pod="edgenius/edgealarmsubscription-f49965fd-tdd6x"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570449 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsp6r\" (UniqueName: \"kubernetes.io/projected/218615a4-d28f-4014-822d-84c6af570fe2-kube-api-access-xsp6r\") pod \"edgesubscriptionservice-6779669c5f-8tgbq\" (UID: \"218615a4-d28f-4014-822d-84c6af570fe2\") " pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570487 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-registration-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570518 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-node-plugin-dir\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570639 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63af6fe5-f768-4f46-b1c2-c25945b2d5a4-config-volume\") pod \"dns-default-jthmx\" (UID: \"63af6fe5-f768-4f46-b1c2-c25945b2d5a4\") " pod="openshift-dns/dns-default-jthmx"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570708 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthzpolicystore-secret\" (UniqueName: \"kubernetes.io/secret/5ba3e342-4e77-4728-8617-6cb001d446b0-edgeauthzpolicystore-secret\") pod \"edgeauthzpolicystore-5889cf9977-sljcq\" (UID: \"5ba3e342-4e77-4728-8617-6cb001d446b0\") " pod="edgenius/edgeauthzpolicystore-5889cf9977-sljcq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570817 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c\") pod \"edgealarmsubscription-f49965fd-tdd6x\" (UID: \"a7b376b4-0e44-4940-9bcc-8c9ad42b02d9\") " pod="edgenius/edgealarmsubscription-f49965fd-tdd6x"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570899 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6\") pod \"edgesubscriptionservice-6779669c5f-8tgbq\" (UID: \"218615a4-d28f-4014-822d-84c6af570fe2\") " pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.570950 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgesubscriptionservice-secret\" (UniqueName: \"kubernetes.io/secret/218615a4-d28f-4014-822d-84c6af570fe2-edgesubscriptionservice-secret\") pod \"edgesubscriptionservice-6779669c5f-8tgbq\" (UID: \"218615a4-d28f-4014-822d-84c6af570fe2\") " pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq"
Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.571001 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtvb\" (UniqueName: \"kubernetes.io/projected/06feb4c2-9a44-433d-b2a1-ab7e76cea2eb-kube-api-access-xgtvb\") pod \"topolvm-node-z7snz\" (UID: \"06feb4c2-9a44-433d-b2a1-ab7e76cea2eb\") " pod="openshift-storage/topolvm-node-z7snz" Jan 17 14:38:55 edgenius microshift[2779]: kubelet W0117 14:38:55.614487 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Jan 17 14:38:55 edgenius microshift[2779]: kubelet W0117 14:38:55.693815 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Jan 17 14:38:55 edgenius microshift[2779]: kubelet W0117 14:38:55.712598 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Jan 17 14:38:55 edgenius microshift[2779]: kube-apiserver I0117 14:38:55.780567 2779 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io Jan 17 14:38:55 edgenius microshift[2779]: kubelet W0117 14:38:55.781950 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice: no such file or directory Jan 17 14:38:55 edgenius microshift[2779]: kube-apiserver I0117 14:38:55.793974 
2779 controller.go:616] quota admission added evaluator for: endpoints Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.890918 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-node-log\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893006 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5x8f\" (UniqueName: \"kubernetes.io/projected/29d572ed-b570-4b8f-85a0-2b43f8c5cb08-kube-api-access-g5x8f\") pod \"edgeauditeventservice-6689859d58-vjwz4\" (UID: \"29d572ed-b570-4b8f-85a0-2b43f8c5cb08\") " pod="edgenius/edgeauditeventservice-6689859d58-vjwz4" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893177 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzldt\" (UniqueName: \"kubernetes.io/projected/7b6117c6-dfd7-494a-926c-e8b49e228960-kube-api-access-nzldt\") pod \"node-resolver-gpptb\" (UID: \"7b6117c6-dfd7-494a-926c-e8b49e228960\") " pod="openshift-dns/node-resolver-gpptb" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893318 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthenticationserver-secret\" (UniqueName: \"kubernetes.io/secret/99254a2a-f130-4eaa-bae0-8f40af8082d8-edgeauthenticationserver-secret\") pod \"edgeauthenticationserver-77667796cc-wz8px\" (UID: \"99254a2a-f130-4eaa-bae0-8f40af8082d8\") " pod="edgenius/edgeauthenticationserver-77667796cc-wz8px" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893378 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthadminapiconfig\" (UniqueName: 
\"kubernetes.io/projected/f2d63a65-c662-4d47-a073-0f1184f6ed0f-edgeauthadminapiconfig\") pod \"edgeauthadminapi-649c48bb6b-r4ndx\" (UID: \"f2d63a65-c662-4d47-a073-0f1184f6ed0f\") " pod="edgenius/edgeauthadminapi-649c48bb6b-r4ndx" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893397 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b6117c6-dfd7-494a-926c-e8b49e228960-hosts-file\") pod \"node-resolver-gpptb\" (UID: \"7b6117c6-dfd7-494a-926c-e8b49e228960\") " pod="openshift-dns/node-resolver-gpptb" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893446 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53896aae-aac4-45d7-b18f-139128576f5f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746\") pod \"edgeinfomodel-6db75d77ff-cxgnw\" (UID: \"a583cdea-1a90-4331-ac02-3a01de3fb5b1\") " pod="edgenius/edgeinfomodel-6db75d77ff-cxgnw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893468 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxpx\" (UniqueName: \"kubernetes.io/projected/99254a2a-f130-4eaa-bae0-8f40af8082d8-kube-api-access-8fxpx\") pod \"edgeauthenticationserver-77667796cc-wz8px\" (UID: \"99254a2a-f130-4eaa-bae0-8f40af8082d8\") " pod="edgenius/edgeauthenticationserver-77667796cc-wz8px" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893529 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-var-lib-openvswitch\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893553 2779 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6\") pod \"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") " pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893577 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a17ea07b-2966-4a2e-8974-0eb5533ce070-signing-key\") pod \"service-ca-77fc4cc659-hskw7\" (UID: \"a17ea07b-2966-4a2e-8974-0eb5533ce070\") " pod="openshift-service-ca/service-ca-77fc4cc659-hskw7" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893621 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9x56\" (UniqueName: \"kubernetes.io/projected/a17ea07b-2966-4a2e-8974-0eb5533ce070-kube-api-access-n9x56\") pod \"service-ca-77fc4cc659-hskw7\" (UID: \"a17ea07b-2966-4a2e-8974-0eb5533ce070\") " pod="openshift-service-ca/service-ca-77fc4cc659-hskw7" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893653 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-etc-openvswitch\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893673 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-768xf\" (UniqueName: \"kubernetes.io/projected/ad96e39e-ce67-4313-8799-46a1ea2d851b-kube-api-access-768xf\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893716 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4p7s\" (UniqueName: \"kubernetes.io/projected/f2d63a65-c662-4d47-a073-0f1184f6ed0f-kube-api-access-z4p7s\") pod \"edgeauthadminapi-649c48bb6b-r4ndx\" (UID: \"f2d63a65-c662-4d47-a073-0f1184f6ed0f\") " pod="edgenius/edgeauthadminapi-649c48bb6b-r4ndx" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893802 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeconfigurationservice-secret\" (UniqueName: \"kubernetes.io/secret/4bf035da-7e84-4e00-8a19-818452f8f30d-edgeconfigurationservice-secret\") pod \"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") " pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893836 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeinfomodel-secret\" (UniqueName: \"kubernetes.io/secret/a583cdea-1a90-4331-ac02-3a01de3fb5b1-edgeinfomodel-secret\") pod \"edgeinfomodel-6db75d77ff-cxgnw\" (UID: \"a583cdea-1a90-4331-ac02-3a01de3fb5b1\") " pod="edgenius/edgeinfomodel-6db75d77ff-cxgnw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893861 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5prv\" (UniqueName: \"kubernetes.io/projected/b86ab6ef-df28-455c-9373-4a4832000498-kube-api-access-g5prv\") pod \"cert-manager-99bb69456-jkdb4\" (UID: \"b86ab6ef-df28-455c-9373-4a4832000498\") " pod="cert-manager/cert-manager-99bb69456-jkdb4" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893897 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-7a2b2701-6804-4389-89bd-96ed13312f78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d\") pod \"edgeauthenticationserver-77667796cc-wz8px\" (UID: \"99254a2a-f130-4eaa-bae0-8f40af8082d8\") " pod="edgenius/edgeauthenticationserver-77667796cc-wz8px" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893927 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-run-openvswitch\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.893987 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-log-socket\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894025 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad96e39e-ce67-4313-8799-46a1ea2d851b-run-ovn\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894044 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad96e39e-ce67-4313-8799-46a1ea2d851b-env-overrides\") pod \"ovnkube-node-q9fvm\" (UID: \"ad96e39e-ce67-4313-8799-46a1ea2d851b\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9fvm" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894066 2779 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauditeventservice-secret\" (UniqueName: \"kubernetes.io/secret/29d572ed-b570-4b8f-85a0-2b43f8c5cb08-edgeauditeventservice-secret\") pod \"edgeauditeventservice-6689859d58-vjwz4\" (UID: \"29d572ed-b570-4b8f-85a0-2b43f8c5cb08\") " pod="edgenius/edgeauditeventservice-6689859d58-vjwz4" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894102 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7796e6c-734d-4051-bc6f-80b44887ce39\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") " pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894140 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j67d\" (UniqueName: \"kubernetes.io/projected/deb6d14e-2b25-41bf-a1ab-8b111acd0e78-kube-api-access-5j67d\") pod \"edgerouter-5d74457fcf-k7hlw\" (UID: \"deb6d14e-2b25-41bf-a1ab-8b111acd0e78\") " pod="edgenius/edgerouter-5d74457fcf-k7hlw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894160 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-833494e0-e898-4273-bdf5-4407a1a16caf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d\") pod \"edgeauditeventservice-6689859d58-vjwz4\" (UID: \"29d572ed-b570-4b8f-85a0-2b43f8c5cb08\") " pod="edgenius/edgeauditeventservice-6689859d58-vjwz4" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894217 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnhg\" (UniqueName: \"kubernetes.io/projected/4bf035da-7e84-4e00-8a19-818452f8f30d-kube-api-access-2cnhg\") pod 
\"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") " pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894244 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20\" (UniqueName: \"kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2\") pod \"edgerouter-5d74457fcf-k7hlw\" (UID: \"deb6d14e-2b25-41bf-a1ab-8b111acd0e78\") " pod="edgenius/edgerouter-5d74457fcf-k7hlw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894269 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8343fecb-6ce6-4780-80cb-9684ff788e87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4\") pod \"edgeauthadminapi-649c48bb6b-r4ndx\" (UID: \"f2d63a65-c662-4d47-a073-0f1184f6ed0f\") " pod="edgenius/edgeauthadminapi-649c48bb6b-r4ndx" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894319 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6m2\" (UniqueName: \"kubernetes.io/projected/a583cdea-1a90-4331-ac02-3a01de3fb5b1-kube-api-access-8b6m2\") pod \"edgeinfomodel-6db75d77ff-cxgnw\" (UID: \"a583cdea-1a90-4331-ac02-3a01de3fb5b1\") " pod="edgenius/edgeinfomodel-6db75d77ff-cxgnw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894345 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"idp-certificate\" (UniqueName: \"kubernetes.io/projected/99254a2a-f130-4eaa-bae0-8f40af8082d8-idp-certificate\") pod \"edgeauthenticationserver-77667796cc-wz8px\" (UID: \"99254a2a-f130-4eaa-bae0-8f40af8082d8\") " pod="edgenius/edgeauthenticationserver-77667796cc-wz8px" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894381 2779 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a17ea07b-2966-4a2e-8974-0eb5533ce070-signing-cabundle\") pod \"service-ca-77fc4cc659-hskw7\" (UID: \"a17ea07b-2966-4a2e-8974-0eb5533ce070\") " pod="openshift-service-ca/service-ca-77fc4cc659-hskw7" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894400 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgerouter-secret\" (UniqueName: \"kubernetes.io/secret/deb6d14e-2b25-41bf-a1ab-8b111acd0e78-edgerouter-secret\") pod \"edgerouter-5d74457fcf-k7hlw\" (UID: \"deb6d14e-2b25-41bf-a1ab-8b111acd0e78\") " pod="edgenius/edgerouter-5d74457fcf-k7hlw" Jan 17 14:38:55 edgenius microshift[2779]: kubelet I0117 14:38:55.894421 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthadminapi-secret\" (UniqueName: \"kubernetes.io/secret/f2d63a65-c662-4d47-a073-0f1184f6ed0f-edgeauthadminapi-secret\") pod \"edgeauthadminapi-649c48bb6b-r4ndx\" (UID: \"f2d63a65-c662-4d47-a073-0f1184f6ed0f\") " pod="edgenius/edgeauthadminapi-649c48bb6b-r4ndx" Jan 17 14:38:55 edgenius microshift[2779]: kubelet E0117 14:38:55.896036 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.39602221 +0000 UTC m=+37.601614657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:55 edgenius microshift[2779]: kubelet E0117 14:38:55.911578 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.4115516 +0000 UTC m=+37.617144147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:55 edgenius microshift[2779]: kubelet E0117 14:38:55.922302 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.422286931 +0000 UTC m=+37.627879378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:55 edgenius microshift[2779]: kubelet E0117 14:38:55.927008 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.426967388 +0000 UTC m=+37.632559835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:55 edgenius microshift[2779]: route-controller-manager I0117 14:38:55.931869 2779 log.go:198] http: TLS handshake error from 127.0.0.1:58480: EOF Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:55.994643 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgetyperegistrydb-secret\" (UniqueName: \"kubernetes.io/secret/daea4ebe-2c14-4dc4-83de-a4c37d005b23-edgetyperegistrydb-secret\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") " pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:55.994897 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") " pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:55.995037 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w9s\" (UniqueName: \"kubernetes.io/projected/daea4ebe-2c14-4dc4-83de-a4c37d005b23-kube-api-access-86w9s\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") " pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:55.996052 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.496039233 +0000 UTC m=+37.701631680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:55.996572 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.49656064 +0000 UTC m=+37.702153087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:55.997013 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.496999945 +0000 UTC m=+37.702592392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:55.997310 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.497302449 +0000 UTC m=+37.702894896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:55.999587 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.499572277 +0000 UTC m=+37.705164824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.000339 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.500264985 +0000 UTC m=+37.705857432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.009876 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.509864202 +0000 UTC m=+37.715456649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095300 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthdb-secret\" (UniqueName: \"kubernetes.io/secret/bfb4052a-0189-4b30-8f47-80d4485b5ebf-edgeauthdb-secret\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") " pod="edgenius/edgeauthdb-55f84588f-n9mmq" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095367 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89281368-8fe8-4a9b-be69-3254d7e153e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") " 
pod="edgenius/edgeauthdb-55f84588f-n9mmq" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095415 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7f2\" (UniqueName: \"kubernetes.io/projected/bfb4052a-0189-4b30-8f47-80d4485b5ebf-kube-api-access-gb7f2\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") " pod="edgenius/edgeauthdb-55f84588f-n9mmq" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095440 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c24f91c-5968-49cb-a325-a9514087828b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") " pod="edgenius/edgeauthdb-55f84588f-n9mmq" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095457 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") " pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.095544 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b499c\" (UniqueName: \"kubernetes.io/projected/b180b903-2cce-4884-b171-abb2042fb354-kube-api-access-b499c\") pod \"cert-manager-cainjector-ffb4747bb-szzhb\" (UID: \"b180b903-2cce-4884-b171-abb2042fb354\") " pod="cert-manager/cert-manager-cainjector-ffb4747bb-szzhb" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.095885 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. 
No retries permitted until 2023-01-17 14:38:56.595877855 +0000 UTC m=+37.801470302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet W0117 14:38:56.134517 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice: no such file or directory
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.196237 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlp6\" (UniqueName: \"kubernetes.io/projected/20676429-968e-49f3-81fe-4cf06a875c4e-kube-api-access-zmlp6\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") " pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.196282 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") " pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.196301 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeinfomodeldb-secret\" (UniqueName: \"kubernetes.io/secret/20676429-968e-49f3-81fe-4cf06a875c4e-edgeinfomodeldb-secret\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") " pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.197074 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.697062193 +0000 UTC m=+37.902654640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.197149 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.697140993 +0000 UTC m=+37.902733440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.197330 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.697322396 +0000 UTC m=+37.902914843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet W0117 14:38:56.238492 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b6117c6_dfd7_494a_926c_e8b49e228960.slice/crio-32e1288b10b27630647d10677cabd91f2ebe3c9d449b7876a4bb8b7bb2925ecc.scope WatchSource:0}: Error finding container 32e1288b10b27630647d10677cabd91f2ebe3c9d449b7876a4bb8b7bb2925ecc: Status 404 returned error can't find the container with id 32e1288b10b27630647d10677cabd91f2ebe3c9d449b7876a4bb8b7bb2925ecc
Jan 17 14:38:56 edgenius microshift[2779]: kubelet W0117 14:38:56.271418 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice: no such file or directory
Jan 17 14:38:56 edgenius microshift[2779]: kubelet W0117 14:38:56.281421 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad96e39e_ce67_4313_8799_46a1ea2d851b.slice/crio-282147b234228f3f8af952915594c8ff28de11e43e425be1466f9de5a6cb77eb.scope WatchSource:0}: Error finding container 282147b234228f3f8af952915594c8ff28de11e43e425be1466f9de5a6cb77eb: Status 404 returned error can't find the container with id 282147b234228f3f8af952915594c8ff28de11e43e425be1466f9de5a6cb77eb
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.298543 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.798532734 +0000 UTC m=+38.004125181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.395377 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-jthmx_openshift-dns_63af6fe5-f768-4f46-b1c2-c25945b2d5a4_0(a0783b8070c0226a99e12974b375fdb0bd976f329ce5d5e1eb6f90ef4eb90206): error adding pod openshift-dns_dns-default-jthmx to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.395447 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-jthmx_openshift-dns_63af6fe5-f768-4f46-b1c2-c25945b2d5a4_0(a0783b8070c0226a99e12974b375fdb0bd976f329ce5d5e1eb6f90ef4eb90206): error adding pod openshift-dns_dns-default-jthmx to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-dns/dns-default-jthmx"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.395493 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-jthmx_openshift-dns_63af6fe5-f768-4f46-b1c2-c25945b2d5a4_0(a0783b8070c0226a99e12974b375fdb0bd976f329ce5d5e1eb6f90ef4eb90206): error adding pod openshift-dns_dns-default-jthmx to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-dns/dns-default-jthmx"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.395549 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-jthmx_openshift-dns(63af6fe5-f768-4f46-b1c2-c25945b2d5a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-jthmx_openshift-dns(63af6fe5-f768-4f46-b1c2-c25945b2d5a4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-jthmx_openshift-dns_63af6fe5-f768-4f46-b1c2-c25945b2d5a4_0(a0783b8070c0226a99e12974b375fdb0bd976f329ce5d5e1eb6f90ef4eb90206): error adding pod openshift-dns_dns-default-jthmx to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="openshift-dns/dns-default-jthmx" podUID=63af6fe5-f768-4f46-b1c2-c25945b2d5a4
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.397812 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7\") pod \"edgedeviceapiorchestrator-ff895dcb9-hwzvr\" (UID: \"fdc0b70a-0540-41c9-b14e-29e5d04d9084\") " pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.397865 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhsz\" (UniqueName: \"kubernetes.io/projected/fdc0b70a-0540-41c9-b14e-29e5d04d9084-kube-api-access-gnhsz\") pod \"edgedeviceapiorchestrator-ff895dcb9-hwzvr\" (UID: \"fdc0b70a-0540-41c9-b14e-29e5d04d9084\") " pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.397987 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgedeviceapiorchestrator-secret\" (UniqueName: \"kubernetes.io/secret/fdc0b70a-0540-41c9-b14e-29e5d04d9084-edgedeviceapiorchestrator-secret\") pod \"edgedeviceapiorchestrator-ff895dcb9-hwzvr\" (UID: \"fdc0b70a-0540-41c9-b14e-29e5d04d9084\") " pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.398557 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.398545157 +0000 UTC m=+38.604137604 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.433334 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-5fc9996875-4hkzw_openshift-storage_59eb2370-7941-4982-adde-19ea693c7bf0_0(c050582d76f32529109b5891b813c882ffbac5f8dc3d783e9ad90da7d8dfa4fc): error adding pod openshift-storage_topolvm-controller-5fc9996875-4hkzw to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.433395 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-5fc9996875-4hkzw_openshift-storage_59eb2370-7941-4982-adde-19ea693c7bf0_0(c050582d76f32529109b5891b813c882ffbac5f8dc3d783e9ad90da7d8dfa4fc): error adding pod openshift-storage_topolvm-controller-5fc9996875-4hkzw to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.433420 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-5fc9996875-4hkzw_openshift-storage_59eb2370-7941-4982-adde-19ea693c7bf0_0(c050582d76f32529109b5891b813c882ffbac5f8dc3d783e9ad90da7d8dfa4fc): error adding pod openshift-storage_topolvm-controller-5fc9996875-4hkzw to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.433472 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"topolvm-controller-5fc9996875-4hkzw_openshift-storage(59eb2370-7941-4982-adde-19ea693c7bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"topolvm-controller-5fc9996875-4hkzw_openshift-storage(59eb2370-7941-4982-adde-19ea693c7bf0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-5fc9996875-4hkzw_openshift-storage_59eb2370-7941-4982-adde-19ea693c7bf0_0(c050582d76f32529109b5891b813c882ffbac5f8dc3d783e9ad90da7d8dfa4fc): error adding pod openshift-storage_topolvm-controller-5fc9996875-4hkzw to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="openshift-storage/topolvm-controller-5fc9996875-4hkzw" podUID=59eb2370-7941-4982-adde-19ea693c7bf0
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.498453 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2\") pod \"edgeapigateway-6669ccbd5d-jffch\" (UID: \"83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7\") " pod="edgenius/edgeapigateway-6669ccbd5d-jffch"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.498544 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbnm\" (UniqueName: \"kubernetes.io/projected/83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7-kube-api-access-9mbnm\") pod \"edgeapigateway-6669ccbd5d-jffch\" (UID: \"83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7\") " pod="edgenius/edgeapigateway-6669ccbd5d-jffch"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.498566 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeapigateway-secret\" (UniqueName: \"kubernetes.io/secret/83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7-edgeapigateway-secret\") pod \"edgeapigateway-6669ccbd5d-jffch\" (UID: \"83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7\") " pod="edgenius/edgeapigateway-6669ccbd5d-jffch"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.499121 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.499112888 +0000 UTC m=+38.704705335 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.499322 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.49931639 +0000 UTC m=+38.704908937 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.499506 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:56.999499692 +0000 UTC m=+38.205092239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.499885 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.499877697 +0000 UTC m=+38.705470144 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.500504 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.500496605 +0000 UTC m=+38.706089052 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.500850 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.500840509 +0000 UTC m=+38.706433056 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.501233 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.501225514 +0000 UTC m=+38.706818061 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.501446 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.501437516 +0000 UTC m=+38.707030063 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.598867 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751\") pod \"edgeeventsubscription-7dd9f64c67-rf99m\" (UID: \"c35cebcb-87fe-4857-b3e5-312a2ee55902\") " pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.598973 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeeventsubscription-secret\" (UniqueName: \"kubernetes.io/secret/c35cebcb-87fe-4857-b3e5-312a2ee55902-edgeeventsubscription-secret\") pod \"edgeeventsubscription-7dd9f64c67-rf99m\" (UID: \"c35cebcb-87fe-4857-b3e5-312a2ee55902\") " pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.599016 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntnd\" (UniqueName: \"kubernetes.io/projected/c35cebcb-87fe-4857-b3e5-312a2ee55902-kube-api-access-nntnd\") pod \"edgeeventsubscription-7dd9f64c67-rf99m\" (UID: \"c35cebcb-87fe-4857-b3e5-312a2ee55902\") " pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.599614 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.099604217 +0000 UTC m=+38.305196664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.599899 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.599891021 +0000 UTC m=+38.805483568 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.600201 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.600192924 +0000 UTC m=+38.805785371 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.600419 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.600412027 +0000 UTC m=+38.806004474 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.601282 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.601273637 +0000 UTC m=+38.806866084 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.682535 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-77fc4cc659-hskw7_openshift-service-ca_a17ea07b-2966-4a2e-8974-0eb5533ce070_0(4aab9547209d5a03cde02823c950a4c56a61a11ae79f94288253da7cdbf0c475): error adding pod openshift-service-ca_service-ca-77fc4cc659-hskw7 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.683918 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-77fc4cc659-hskw7_openshift-service-ca_a17ea07b-2966-4a2e-8974-0eb5533ce070_0(4aab9547209d5a03cde02823c950a4c56a61a11ae79f94288253da7cdbf0c475): error adding pod openshift-service-ca_service-ca-77fc4cc659-hskw7 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-service-ca/service-ca-77fc4cc659-hskw7"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.684021 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-77fc4cc659-hskw7_openshift-service-ca_a17ea07b-2966-4a2e-8974-0eb5533ce070_0(4aab9547209d5a03cde02823c950a4c56a61a11ae79f94288253da7cdbf0c475): error adding pod openshift-service-ca_service-ca-77fc4cc659-hskw7 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-service-ca/service-ca-77fc4cc659-hskw7"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.684116 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"service-ca-77fc4cc659-hskw7_openshift-service-ca(a17ea07b-2966-4a2e-8974-0eb5533ce070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"service-ca-77fc4cc659-hskw7_openshift-service-ca(a17ea07b-2966-4a2e-8974-0eb5533ce070)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-77fc4cc659-hskw7_openshift-service-ca_a17ea07b-2966-4a2e-8974-0eb5533ce070_0(4aab9547209d5a03cde02823c950a4c56a61a11ae79f94288253da7cdbf0c475): error adding pod openshift-service-ca_service-ca-77fc4cc659-hskw7 to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="openshift-service-ca/service-ca-77fc4cc659-hskw7" podUID=a17ea07b-2966-4a2e-8974-0eb5533ce070
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.699411 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgemethodinvocation-secret\" (UniqueName: \"kubernetes.io/secret/799c5e55-8569-4df9-ad89-16e0d46bb5b7-edgemethodinvocation-secret\") pod \"edgemethodinvocation-7d5cb5d865-mdqtp\" (UID: \"799c5e55-8569-4df9-ad89-16e0d46bb5b7\") " pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.701558 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1a2031d-e768-48b5-8951-eae888ae8f91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac\") pod \"edgemethodinvocation-7d5cb5d865-mdqtp\" (UID: \"799c5e55-8569-4df9-ad89-16e0d46bb5b7\") " pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.702393 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrwhf\" (UniqueName: \"kubernetes.io/projected/799c5e55-8569-4df9-ad89-16e0d46bb5b7-kube-api-access-vrwhf\") pod \"edgemethodinvocation-7d5cb5d865-mdqtp\" (UID: \"799c5e55-8569-4df9-ad89-16e0d46bb5b7\") " pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.704715 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.204693103 +0000 UTC m=+38.410285550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.705058 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.705045307 +0000 UTC m=+38.910637854 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.705128 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.705120408 +0000 UTC m=+38.910712855 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.709645 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.709630763 +0000 UTC m=+38.915223210 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.806288 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.306278245 +0000 UTC m=+38.511870792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.806318 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.806310246 +0000 UTC m=+39.011902693 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.808667 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-z7snz_openshift-storage_06feb4c2-9a44-433d-b2a1-ab7e76cea2eb_0(244c34d1996c4f547c6f434198d80f147389c3f6f6d83d280841752233fb542b): error adding pod openshift-storage_topolvm-node-z7snz to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.808705 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-z7snz_openshift-storage_06feb4c2-9a44-433d-b2a1-ab7e76cea2eb_0(244c34d1996c4f547c6f434198d80f147389c3f6f6d83d280841752233fb542b): error adding pod openshift-storage_topolvm-node-z7snz to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.808725 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-z7snz_openshift-storage_06feb4c2-9a44-433d-b2a1-ab7e76cea2eb_0(244c34d1996c4f547c6f434198d80f147389c3f6f6d83d280841752233fb542b): error adding pod openshift-storage_topolvm-node-z7snz to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="openshift-storage/topolvm-node-z7snz"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.808761 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"topolvm-node-z7snz_openshift-storage(06feb4c2-9a44-433d-b2a1-ab7e76cea2eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"topolvm-node-z7snz_openshift-storage(06feb4c2-9a44-433d-b2a1-ab7e76cea2eb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-z7snz_openshift-storage_06feb4c2-9a44-433d-b2a1-ab7e76cea2eb_0(244c34d1996c4f547c6f434198d80f147389c3f6f6d83d280841752233fb542b): error adding pod openshift-storage_topolvm-node-z7snz to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="openshift-storage/topolvm-node-z7snz" podUID=06feb4c2-9a44-433d-b2a1-ab7e76cea2eb
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.824685 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-99bb69456-jkdb4_cert-manager_b86ab6ef-df28-455c-9373-4a4832000498_0(acdfa31cfe2134469de03bea4044a4e8b135579138f96f70b98d48bb123c874d): error adding pod cert-manager_cert-manager-99bb69456-jkdb4 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.824746 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-99bb69456-jkdb4_cert-manager_b86ab6ef-df28-455c-9373-4a4832000498_0(acdfa31cfe2134469de03bea4044a4e8b135579138f96f70b98d48bb123c874d): error adding pod cert-manager_cert-manager-99bb69456-jkdb4 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="cert-manager/cert-manager-99bb69456-jkdb4"
Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.824772 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cert-manager-99bb69456-jkdb4_cert-manager_b86ab6ef-df28-455c-9373-4a4832000498_0(acdfa31cfe2134469de03bea4044a4e8b135579138f96f70b98d48bb123c874d): error adding pod cert-manager_cert-manager-99bb69456-jkdb4 to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="cert-manager/cert-manager-99bb69456-jkdb4" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.824814 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-99bb69456-jkdb4_cert-manager(b86ab6ef-df28-455c-9373-4a4832000498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-99bb69456-jkdb4_cert-manager(b86ab6ef-df28-455c-9373-4a4832000498)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-99bb69456-jkdb4_cert-manager_b86ab6ef-df28-455c-9373-4a4832000498_0(acdfa31cfe2134469de03bea4044a4e8b135579138f96f70b98d48bb123c874d): error adding pod cert-manager_cert-manager-99bb69456-jkdb4 to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="cert-manager/cert-manager-99bb69456-jkdb4" podUID=b86ab6ef-df28-455c-9373-4a4832000498 Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908368 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-node-log\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius 
microshift[2779]: kubelet I0117 14:38:56.908419 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeplatformeventsubscription-secret\" (UniqueName: \"kubernetes.io/secret/30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f-edgeplatformeventsubscription-secret\") pod \"edgeplatformeventsubscription-5699bfd6bf-lhnls\" (UID: \"30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f\") " pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908434 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-systemd-units\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908448 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-host-run-netns\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908463 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0174cf30-3d73-4cd7-9b85-8b13b36edddd-env-overrides\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908515 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4\") pod 
\"edgeplatformeventsubscription-5699bfd6bf-lhnls\" (UID: \"30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f\") " pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908545 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0174cf30-3d73-4cd7-9b85-8b13b36edddd-ovnkube-config\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908560 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-kubeconfig\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908583 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-host-run-ovn-kubernetes\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908619 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx4b\" (UniqueName: \"kubernetes.io/projected/1d981a1f-fbb2-471a-aa8c-4fb30c289628-kube-api-access-chx4b\") pod \"edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d\" (UID: \"1d981a1f-fbb2-471a-aa8c-4fb30c289628\") " pod="edgenius/edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908637 2779 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-run-ovn\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908650 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-host-slash\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908663 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-log-socket\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908676 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882ns\" (UniqueName: \"kubernetes.io/projected/0174cf30-3d73-4cd7-9b85-8b13b36edddd-kube-api-access-882ns\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908700 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddw7\" (UniqueName: \"kubernetes.io/projected/30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f-kube-api-access-sddw7\") pod \"edgeplatformeventsubscription-5699bfd6bf-lhnls\" (UID: \"30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f\") " pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls" Jan 17 
14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908715 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch-node\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-etc-openvswitch-node\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908735 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-host-cni-netd\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908748 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-host-cni-bin\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet I0117 14:38:56.908789 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0174cf30-3d73-4cd7-9b85-8b13b36edddd-run-openvswitch\") pod \"ovnkube-master-hfd5b\" (UID: \"0174cf30-3d73-4cd7-9b85-8b13b36edddd\") " pod="openshift-ovn-kubernetes/ovnkube-master-hfd5b" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.918920 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cert-manager-cainjector-ffb4747bb-szzhb_cert-manager_b180b903-2cce-4884-b171-abb2042fb354_0(4efcfb06f03ff89d32620c3a05487ac9ae70c076c6704474a00e7f728649fc89): error adding pod cert-manager_cert-manager-cainjector-ffb4747bb-szzhb to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.918977 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-ffb4747bb-szzhb_cert-manager_b180b903-2cce-4884-b171-abb2042fb354_0(4efcfb06f03ff89d32620c3a05487ac9ae70c076c6704474a00e7f728649fc89): error adding pod cert-manager_cert-manager-cainjector-ffb4747bb-szzhb to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="cert-manager/cert-manager-cainjector-ffb4747bb-szzhb" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.919000 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-ffb4747bb-szzhb_cert-manager_b180b903-2cce-4884-b171-abb2042fb354_0(4efcfb06f03ff89d32620c3a05487ac9ae70c076c6704474a00e7f728649fc89): error adding pod cert-manager_cert-manager-cainjector-ffb4747bb-szzhb to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" 
pod="cert-manager/cert-manager-cainjector-ffb4747bb-szzhb" Jan 17 14:38:56 edgenius microshift[2779]: kubelet E0117 14:38:56.919044 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-ffb4747bb-szzhb_cert-manager(b180b903-2cce-4884-b171-abb2042fb354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-ffb4747bb-szzhb_cert-manager(b180b903-2cce-4884-b171-abb2042fb354)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-ffb4747bb-szzhb_cert-manager_b180b903-2cce-4884-b171-abb2042fb354_0(4efcfb06f03ff89d32620c3a05487ac9ae70c076c6704474a00e7f728649fc89): error adding pod cert-manager_cert-manager-cainjector-ffb4747bb-szzhb to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="cert-manager/cert-manager-cainjector-ffb4747bb-szzhb" podUID=b180b903-2cce-4884-b171-abb2042fb354 Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009334 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfccbc83-78d0-46d3-bb90-639eb139f067\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8\") pod \"edgeauthadminui-bbb77c5f7-b5jrg\" (UID: \"b9ab877e-4277-412f-9d76-f3833c0807fc\") " pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009395 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2xb\" (UniqueName: \"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-kube-api-access-hw2xb\") pod \"edge-broker-7fd9b99b6c-ncxvq\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") " 
pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009452 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09\" (UniqueName: \"kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39\") pod \"edge-broker-7fd9b99b6c-ncxvq\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") " pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009495 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edge-broker-secret\" (UniqueName: \"kubernetes.io/secret/4af511ec-802a-47ed-8715-fb0f87f76549-edge-broker-secret\") pod \"edge-broker-7fd9b99b6c-ncxvq\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") " pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009551 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthadminui-secret\" (UniqueName: \"kubernetes.io/secret/b9ab877e-4277-412f-9d76-f3833c0807fc-edgeauthadminui-secret\") pod \"edgeauthadminui-bbb77c5f7-b5jrg\" (UID: \"b9ab877e-4277-412f-9d76-f3833c0807fc\") " pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.009926 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4pf\" (UniqueName: \"kubernetes.io/projected/b9ab877e-4277-412f-9d76-f3833c0807fc-kube-api-access-xw4pf\") pod \"edgeauthadminui-bbb77c5f7-b5jrg\" (UID: \"b9ab877e-4277-412f-9d76-f3833c0807fc\") " pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.010046 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"password-file\" (UniqueName: 
\"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-password-file\") pod \"edge-broker-7fd9b99b6c-ncxvq\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") " pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.010152 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.010144339 +0000 UTC m=+39.215736786 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.010193 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.51018614 +0000 UTC m=+38.715778587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.118407 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. 
No retries permitted until 2023-01-17 14:38:58.118391964 +0000 UTC m=+39.323984411 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.118909 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.61889857 +0000 UTC m=+38.824491017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.212609 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgefilestorage-secret\" (UniqueName: \"kubernetes.io/secret/49ac2d99-3c8d-4886-a60e-898d1dfeb9bd-edgefilestorage-secret\") pod \"edgefilestorage-fd95f9fcb-2l447\" (UID: \"49ac2d99-3c8d-4886-a60e-898d1dfeb9bd\") " pod="edgenius/edgefilestorage-fd95f9fcb-2l447" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.212700 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-715180ea-d66b-4c0e-b754-18397f45e045\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9\") pod 
\"edgefilestorage-fd95f9fcb-2l447\" (UID: \"49ac2d99-3c8d-4886-a60e-898d1dfeb9bd\") " pod="edgenius/edgefilestorage-fd95f9fcb-2l447" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.212745 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8zm\" (UniqueName: \"kubernetes.io/projected/49ac2d99-3c8d-4886-a60e-898d1dfeb9bd-kube-api-access-jq8zm\") pod \"edgefilestorage-fd95f9fcb-2l447\" (UID: \"49ac2d99-3c8d-4886-a60e-898d1dfeb9bd\") " pod="edgenius/edgefilestorage-fd95f9fcb-2l447" Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.213117 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.213108122 +0000 UTC m=+39.418700569 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.213434 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:57.713425926 +0000 UTC m=+38.919018473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet W0117 14:38:57.284043 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0174cf30_3d73_4cd7_9b85_8b13b36edddd.slice/crio-bfd75279168bbe1aa6bdf553d07639ffd75eecf310c238abaf1ac89169f03fec.scope WatchSource:0}: Error finding container bfd75279168bbe1aa6bdf553d07639ffd75eecf310c238abaf1ac89169f03fec: Status 404 returned error can't find the container with id bfd75279168bbe1aa6bdf553d07639ffd75eecf310c238abaf1ac89169f03fec Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.313990 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.313977756 +0000 UTC m=+39.519570203 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.315359 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9 podName: nodeName:}" failed. 
No retries permitted until 2023-01-17 14:38:57.815350373 +0000 UTC m=+39.020942820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-715180ea-d66b-4c0e-b754-18397f45e045" (UniqueName: "kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9") pod "edgefilestorage-fd95f9fcb-2l447" (UID: "49ac2d99-3c8d-4886-a60e-898d1dfeb9bd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.413400 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f\") pod \"edgevariablesubscription-6db6c446c5-vnbbm\" (UID: \"04b78a0a-d69e-4dad-817b-40e4b7b399d2\") " pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.413478 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgevariablesubscription-secret\" (UniqueName: \"kubernetes.io/secret/04b78a0a-d69e-4dad-817b-40e4b7b399d2-edgevariablesubscription-secret\") pod \"edgevariablesubscription-6db6c446c5-vnbbm\" (UID: \"04b78a0a-d69e-4dad-817b-40e4b7b399d2\") " pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm" Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.413678 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpct9\" (UniqueName: \"kubernetes.io/projected/04b78a0a-d69e-4dad-817b-40e4b7b399d2-kube-api-access-fpct9\") pod \"edgevariablesubscription-6db6c446c5-vnbbm\" (UID: \"04b78a0a-d69e-4dad-817b-40e4b7b399d2\") " pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm" Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.414382 2779 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.414370585 +0000 UTC m=+40.619963032 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.501075 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius_1d981a1f-fbb2-471a-aa8c-4fb30c289628_0(b08fe3f4a5effab4a08f4f3a39e352c9476fa900f21b8f6827f1cc2e5d16c088): error adding pod edgenius_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.501156 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius_1d981a1f-fbb2-471a-aa8c-4fb30c289628_0(b08fe3f4a5effab4a08f4f3a39e352c9476fa900f21b8f6827f1cc2e5d16c088): error adding pod edgenius_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="edgenius/edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.501197 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius_1d981a1f-fbb2-471a-aa8c-4fb30c289628_0(b08fe3f4a5effab4a08f4f3a39e352c9476fa900f21b8f6827f1cc2e5d16c088): error adding pod edgenius_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="edgenius/edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.501299 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius(1d981a1f-fbb2-471a-aa8c-4fb30c289628)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius(1d981a1f-fbb2-471a-aa8c-4fb30c289628)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d_edgenius_1d981a1f-fbb2-471a-aa8c-4fb30c289628_0(b08fe3f4a5effab4a08f4f3a39e352c9476fa900f21b8f6827f1cc2e5d16c088): error adding pod edgenius_edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="edgenius/edgenius-orchestrator-controller-manager-8455c5ddf7-ld25d" podUID=1d981a1f-fbb2-471a-aa8c-4fb30c289628
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.514212 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgeauthzpolicyserver-secret\" (UniqueName: \"kubernetes.io/secret/8ad2af1d-571d-4352-a43a-8d1511797a50-edgeauthzpolicyserver-secret\") pod \"edgeauthzpolicyserver-5b4966b595-stp8v\" (UID: \"8ad2af1d-571d-4352-a43a-8d1511797a50\") " pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.514784 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7\") pod \"edgeauthzpolicyserver-5b4966b595-stp8v\" (UID: \"8ad2af1d-571d-4352-a43a-8d1511797a50\") " pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.514926 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5zvs\" (UniqueName: \"kubernetes.io/projected/2d1fe1e9-5bbb-4c06-be50-091f0795f91d-kube-api-access-q5zvs\") pod \"cert-manager-webhook-545bd5d7d8-lrscs\" (UID: \"2d1fe1e9-5bbb-4c06-be50-091f0795f91d\") " pod="cert-manager/cert-manager-webhook-545bd5d7d8-lrscs"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.515036 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphh7\" (UniqueName: \"kubernetes.io/projected/73bf4d14-9234-4f3d-a7a1-282e3fc81909-kube-api-access-zphh7\") pod \"router-default-ddc545d88-fnfsb\" (UID: \"73bf4d14-9234-4f3d-a7a1-282e3fc81909\") " pod="openshift-ingress/router-default-ddc545d88-fnfsb"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.515126 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle\") pod \"router-default-ddc545d88-fnfsb\" (UID: \"73bf4d14-9234-4f3d-a7a1-282e3fc81909\") " pod="openshift-ingress/router-default-ddc545d88-fnfsb"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.515163 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.515150017 +0000 UTC m=+40.720742464 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.515282 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73bf4d14-9234-4f3d-a7a1-282e3fc81909-default-certificate\") pod \"router-default-ddc545d88-fnfsb\" (UID: \"73bf4d14-9234-4f3d-a7a1-282e3fc81909\") " pod="openshift-ingress/router-default-ddc545d88-fnfsb"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.515399 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrp4\" (UniqueName: \"kubernetes.io/projected/8ad2af1d-571d-4352-a43a-8d1511797a50-kube-api-access-btrp4\") pod \"edgeauthzpolicyserver-5b4966b595-stp8v\" (UID: \"8ad2af1d-571d-4352-a43a-8d1511797a50\") " pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.515830 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.515816626 +0000 UTC m=+40.721409173 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.515905 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.015893927 +0000 UTC m=+39.221486374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec" (UniqueName: "kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f") pod "edgevariablesubscription-6db6c446c5-vnbbm" (UID: "04b78a0a-d69e-4dad-817b-40e4b7b399d2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.516139 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.516124029 +0000 UTC m=+40.721716576 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.516320 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.516303232 +0000 UTC m=+40.721895679 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.516512 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.516499834 +0000 UTC m=+40.722092281 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.516689 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.516679536 +0000 UTC m=+39.722271983 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.516969 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.516953239 +0000 UTC m=+40.722545686 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.520285 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.520216879 +0000 UTC m=+40.725809326 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.616747 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b9998c6-944e-44c0-8379-c7f47e615b06\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc\") pod \"edgetyperegistry-587d8f8d84-5w96j\" (UID: \"c073d676-39d2-4584-b032-06f164bb8202\") " pod="edgenius/edgetyperegistry-587d8f8d84-5w96j"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.616967 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vx2\" (UniqueName: \"kubernetes.io/projected/c073d676-39d2-4584-b032-06f164bb8202-kube-api-access-w5vx2\") pod \"edgetyperegistry-587d8f8d84-5w96j\" (UID: \"c073d676-39d2-4584-b032-06f164bb8202\") " pod="edgenius/edgetyperegistry-587d8f8d84-5w96j"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.617004 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edgetyperegistry-secret\" (UniqueName: \"kubernetes.io/secret/c073d676-39d2-4584-b032-06f164bb8202-edgetyperegistry-secret\") pod \"edgetyperegistry-587d8f8d84-5w96j\" (UID: \"c073d676-39d2-4584-b032-06f164bb8202\") " pod="edgenius/edgetyperegistry-587d8f8d84-5w96j"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet I0117 14:38:57.617030 2779 reconciler.go:169] "Reconciler: start to sync state"
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.617688 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.117676872 +0000 UTC m=+39.323269419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d" (UniqueName: "kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7") pod "edgeauthzpolicyserver-5b4966b595-stp8v" (UID: "8ad2af1d-571d-4352-a43a-8d1511797a50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.617849 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle podName:73bf4d14-9234-4f3d-a7a1-282e3fc81909 nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.117836774 +0000 UTC m=+39.323429221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle") pod "router-default-ddc545d88-fnfsb" (UID: "73bf4d14-9234-4f3d-a7a1-282e3fc81909") : configmap references non-existent config key: service-ca.crt
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.617919 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.617910375 +0000 UTC m=+40.823502822 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.618047 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.618039576 +0000 UTC m=+40.823632023 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.618306 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.618298179 +0000 UTC m=+40.823890626 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.618385 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.61837458 +0000 UTC m=+40.823967027 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.733169 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.733149984 +0000 UTC m=+40.938742431 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.733479 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.733469288 +0000 UTC m=+40.939061835 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.735149 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.735135709 +0000 UTC m=+39.940728156 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.735516 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.735505213 +0000 UTC m=+40.941097660 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.735811 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.735801417 +0000 UTC m=+39.941393864 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.736104 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.23608182 +0000 UTC m=+39.441674267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-3b9998c6-944e-44c0-8379-c7f47e615b06" (UniqueName: "kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc") pod "edgetyperegistry-587d8f8d84-5w96j" (UID: "c073d676-39d2-4584-b032-06f164bb8202") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.832814 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.832801203 +0000 UTC m=+41.038393650 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:57 edgenius microshift[2779]: kubelet E0117 14:38:57.832854 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:58.832849204 +0000 UTC m=+40.038441651 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-715180ea-d66b-4c0e-b754-18397f45e045" (UniqueName: "kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9") pod "edgefilestorage-fd95f9fcb-2l447" (UID: "49ac2d99-3c8d-4886-a60e-898d1dfeb9bd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.035282 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.03527178 +0000 UTC m=+41.240864227 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.035310 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.035304481 +0000 UTC m=+40.240896928 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec" (UniqueName: "kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f") pod "edgevariablesubscription-6db6c446c5-vnbbm" (UID: "04b78a0a-d69e-4dad-817b-40e4b7b399d2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.061120 2779 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-545bd5d7d8-lrscs_cert-manager_2d1fe1e9-5bbb-4c06-be50-091f0795f91d_0(0a9b1d4f6444287c653d935b0717a8805ff8e5a125c1b53f949b843ed6f45950): error adding pod cert-manager_cert-manager-webhook-545bd5d7d8-lrscs to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory"
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.061177 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-545bd5d7d8-lrscs_cert-manager_2d1fe1e9-5bbb-4c06-be50-091f0795f91d_0(0a9b1d4f6444287c653d935b0717a8805ff8e5a125c1b53f949b843ed6f45950): error adding pod cert-manager_cert-manager-webhook-545bd5d7d8-lrscs to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="cert-manager/cert-manager-webhook-545bd5d7d8-lrscs"
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.061200 2779 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-545bd5d7d8-lrscs_cert-manager_2d1fe1e9-5bbb-4c06-be50-091f0795f91d_0(0a9b1d4f6444287c653d935b0717a8805ff8e5a125c1b53f949b843ed6f45950): error adding pod cert-manager_cert-manager-webhook-545bd5d7d8-lrscs to CNI network \"ovn-kubernetes\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (add): failed to send CNI request: Post \"http://dummy/\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory" pod="cert-manager/cert-manager-webhook-545bd5d7d8-lrscs"
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.061237 2779 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-545bd5d7d8-lrscs_cert-manager(2d1fe1e9-5bbb-4c06-be50-091f0795f91d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-545bd5d7d8-lrscs_cert-manager(2d1fe1e9-5bbb-4c06-be50-091f0795f91d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-545bd5d7d8-lrscs_cert-manager_2d1fe1e9-5bbb-4c06-be50-091f0795f91d_0(0a9b1d4f6444287c653d935b0717a8805ff8e5a125c1b53f949b843ed6f45950): error adding pod cert-manager_cert-manager-webhook-545bd5d7d8-lrscs to CNI network \\\"ovn-kubernetes\\\": plugin type=\\\"ovn-k8s-cni-overlay\\\" name=\\\"ovn-kubernetes\\\" failed (add): failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory\"" pod="cert-manager/cert-manager-webhook-545bd5d7d8-lrscs" podUID=2d1fe1e9-5bbb-4c06-be50-091f0795f91d
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.136205 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.136191515 +0000 UTC m=+41.341783962 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.136254 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.136246816 +0000 UTC m=+40.341839263 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d" (UniqueName: "kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7") pod "edgeauthzpolicyserver-5b4966b595-stp8v" (UID: "8ad2af1d-571d-4352-a43a-8d1511797a50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.136550 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle podName:73bf4d14-9234-4f3d-a7a1-282e3fc81909 nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.136542519 +0000 UTC m=+40.342134966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle") pod "router-default-ddc545d88-fnfsb" (UID: "73bf4d14-9234-4f3d-a7a1-282e3fc81909") : configmap references non-existent config key: service-ca.crt
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.236759 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.236748445 +0000 UTC m=+41.442340892 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.236922 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:38:59.236914147 +0000 UTC m=+40.442506594 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-3b9998c6-944e-44c0-8379-c7f47e615b06" (UniqueName: "kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc") pod "edgetyperegistry-587d8f8d84-5w96j" (UID: "c073d676-39d2-4584-b032-06f164bb8202") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.338257 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.338245387 +0000 UTC m=+41.543837834 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.540317 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.540307659 +0000 UTC m=+41.745900106 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.742138 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.742126728 +0000 UTC m=+41.947719275 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.742170 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.742165028 +0000 UTC m=+41.947757475 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:58 edgenius microshift[2779]: kubelet E0117 14:38:58.844234 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:00.844221477 +0000 UTC m=+42.049813924 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-715180ea-d66b-4c0e-b754-18397f45e045" (UniqueName: "kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9") pod "edgefilestorage-fd95f9fcb-2l447" (UID: "49ac2d99-3c8d-4886-a60e-898d1dfeb9bd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.046654 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:01.046633253 +0000 UTC m=+42.252225800 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec" (UniqueName: "kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f") pod "edgevariablesubscription-6db6c446c5-vnbbm" (UID: "04b78a0a-d69e-4dad-817b-40e4b7b399d2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.147489 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle podName:73bf4d14-9234-4f3d-a7a1-282e3fc81909 nodeName:}" failed. No retries permitted until 2023-01-17 14:39:01.147471487 +0000 UTC m=+42.353063934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle") pod "router-default-ddc545d88-fnfsb" (UID: "73bf4d14-9234-4f3d-a7a1-282e3fc81909") : configmap references non-existent config key: service-ca.crt
Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.147722 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:01.147708489 +0000 UTC m=+42.353300936 (durationBeforeRetry 2s).
Error: MountVolume.MountDevice failed for volume "pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d" (UniqueName: "kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7") pod "edgeauthzpolicyserver-5b4966b595-stp8v" (UID: "8ad2af1d-571d-4352-a43a-8d1511797a50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.248808 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:01.248798226 +0000 UTC m=+42.454390673 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-3b9998c6-944e-44c0-8379-c7f47e615b06" (UniqueName: "kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc") pod "edgetyperegistry-587d8f8d84-5w96j" (UID: "c073d676-39d2-4584-b032-06f164bb8202") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.450611 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.450592195 +0000 UTC m=+44.656184642 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.551940 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.551923534 +0000 UTC m=+44.757516081 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552012 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.552004335 +0000 UTC m=+44.757596882 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552057 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.552051536 +0000 UTC m=+44.757644083 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552236 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.552223938 +0000 UTC m=+44.757816385 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552421 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.55241554 +0000 UTC m=+44.758007987 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552448 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.552441841 +0000 UTC m=+44.758034288 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.552678 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.552671744 +0000 UTC m=+44.758264291 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.652644 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.652634066 +0000 UTC m=+44.858226513 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.652690 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.652682567 +0000 UTC m=+44.858275014 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.652705 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.652698467 +0000 UTC m=+44.858290914 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.652720 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.652716267 +0000 UTC m=+44.858308714 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.753364 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.753351999 +0000 UTC m=+44.958944446 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.753449 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.7534365 +0000 UTC m=+44.959028947 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.753479 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.7534701 +0000 UTC m=+44.959062547 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:38:59 edgenius microshift[2779]: kubelet E0117 14:38:59.854066 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:03.854054431 +0000 UTC m=+45.059646878 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.057074 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.057059914 +0000 UTC m=+45.262652361 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.157931 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.157919448 +0000 UTC m=+45.363511895 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.259017 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.259006485 +0000 UTC m=+45.464598932 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.359641 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.359608615 +0000 UTC m=+45.565201062 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.560764 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.560753376 +0000 UTC m=+45.766345823 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.763598 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.763585257 +0000 UTC m=+45.969177704 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.764160 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.764152064 +0000 UTC m=+45.969744511 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kubelet E0117 14:39:00.864645 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:04.864634294 +0000 UTC m=+46.070226741 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-715180ea-d66b-4c0e-b754-18397f45e045" (UniqueName: "kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9") pod "edgefilestorage-fd95f9fcb-2l447" (UID: "49ac2d99-3c8d-4886-a60e-898d1dfeb9bd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:00 edgenius microshift[2779]: kube-apiserver W0117 14:39:00.949259 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:39:00 edgenius microshift[2779]: kube-apiserver E0117 14:39:00.949311 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:39:01 edgenius microshift[2779]: kubelet E0117 14:39:01.067874 2779 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:05.06786168 +0000 UTC m=+46.273454227 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec" (UniqueName: "kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f") pod "edgevariablesubscription-6db6c446c5-vnbbm" (UID: "04b78a0a-d69e-4dad-817b-40e4b7b399d2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:01 edgenius microshift[2779]: kubelet I0117 14:39:01.138802 2779 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Jan 17 14:39:01 edgenius microshift[2779]: kubelet E0117 14:39:01.168840 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle podName:73bf4d14-9234-4f3d-a7a1-282e3fc81909 nodeName:}" failed. No retries permitted until 2023-01-17 14:39:05.168816515 +0000 UTC m=+46.374408962 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle") pod "router-default-ddc545d88-fnfsb" (UID: "73bf4d14-9234-4f3d-a7a1-282e3fc81909") : configmap references non-existent config key: service-ca.crt Jan 17 14:39:01 edgenius microshift[2779]: kubelet E0117 14:39:01.169508 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:05.169493023 +0000 UTC m=+46.375085470 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d" (UniqueName: "kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7") pod "edgeauthzpolicyserver-5b4966b595-stp8v" (UID: "8ad2af1d-571d-4352-a43a-8d1511797a50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:01 edgenius microshift[2779]: kubelet E0117 14:39:01.270341 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:05.270327857 +0000 UTC m=+46.475920304 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-3b9998c6-944e-44c0-8379-c7f47e615b06" (UniqueName: "kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc") pod "edgetyperegistry-587d8f8d84-5w96j" (UID: "c073d676-39d2-4584-b032-06f164bb8202") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.493813 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.493801458 +0000 UTC m=+52.699393905 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594364 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594353688 +0000 UTC m=+52.799946135 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594417 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594411988 +0000 UTC m=+52.800004435 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594429 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594425889 +0000 UTC m=+52.800018336 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594463 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594459089 +0000 UTC m=+52.800051536 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594661 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594655191 +0000 UTC m=+52.800247638 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594847 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594816193 +0000 UTC m=+52.800408640 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.594900 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.594894894 +0000 UTC m=+52.800487341 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.695621 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.695609526 +0000 UTC m=+52.901201973 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.695676 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.695667327 +0000 UTC m=+52.901259874 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.695690 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.695686327 +0000 UTC m=+52.901278774 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.695701 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.695697127 +0000 UTC m=+52.901289674 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.796700 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.796677763 +0000 UTC m=+53.002270310 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.796906 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.796889065 +0000 UTC m=+53.002481512 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.797012 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.797004867 +0000 UTC m=+53.002597314 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:03 edgenius microshift[2779]: kubelet E0117 14:39:03.896840 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:11.896823288 +0000 UTC m=+53.102415735 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.098363 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.098316053 +0000 UTC m=+53.303908500 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.199690 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.199677293 +0000 UTC m=+53.405269740 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.300451 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.300437226 +0000 UTC m=+53.506029673 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.401517 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.401503762 +0000 UTC m=+53.607096309 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.603814 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.603802837 +0000 UTC m=+53.809395284 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.806633 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.806620418 +0000 UTC m=+54.012212865 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.806694 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.806675719 +0000 UTC m=+54.012268266 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:04 edgenius microshift[2779]: kubelet E0117 14:39:04.908068 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:12.908045059 +0000 UTC m=+54.113637606 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-715180ea-d66b-4c0e-b754-18397f45e045" (UniqueName: "kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9") pod "edgefilestorage-fd95f9fcb-2l447" (UID: "49ac2d99-3c8d-4886-a60e-898d1dfeb9bd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:05 edgenius microshift[2779]: kube-controller-manager E0117 14:39:05.082315 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:39:05 edgenius microshift[2779]: kube-controller-manager I0117 14:39:05.082474 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:39:05 edgenius microshift[2779]: kubelet E0117 14:39:05.109453 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f podName: nodeName:}" failed. 
No retries permitted until 2023-01-17 14:39:13.109440523 +0000 UTC m=+54.315032970 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec" (UniqueName: "kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f") pod "edgevariablesubscription-6db6c446c5-vnbbm" (UID: "04b78a0a-d69e-4dad-817b-40e4b7b399d2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:05 edgenius microshift[2779]: kubelet E0117 14:39:05.210628 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle podName:73bf4d14-9234-4f3d-a7a1-282e3fc81909 nodeName:}" failed. No retries permitted until 2023-01-17 14:39:13.21061376 +0000 UTC m=+54.416206207 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73bf4d14-9234-4f3d-a7a1-282e3fc81909-service-ca-bundle") pod "router-default-ddc545d88-fnfsb" (UID: "73bf4d14-9234-4f3d-a7a1-282e3fc81909") : configmap references non-existent config key: service-ca.crt Jan 17 14:39:05 edgenius microshift[2779]: kubelet E0117 14:39:05.211131 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:13.211119766 +0000 UTC m=+54.416712213 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d" (UniqueName: "kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7") pod "edgeauthzpolicyserver-5b4966b595-stp8v" (UID: "8ad2af1d-571d-4352-a43a-8d1511797a50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:05 edgenius microshift[2779]: kubelet E0117 14:39:05.312560 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:13.312547007 +0000 UTC m=+54.518139454 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-3b9998c6-944e-44c0-8379-c7f47e615b06" (UniqueName: "kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc") pod "edgetyperegistry-587d8f8d84-5w96j" (UID: "c073d676-39d2-4584-b032-06f164bb8202") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:08 edgenius microshift[2779]: kubelet W0117 14:39:08.651413 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17ea07b_2966_4a2e_8974_0eb5533ce070.slice/crio-1f18ed2ad249bdeed734ff8e9c89fbdafe46a7eed610f8b4b293e51b634ea168.scope WatchSource:0}: Error finding container 1f18ed2ad249bdeed734ff8e9c89fbdafe46a7eed610f8b4b293e51b634ea168: Status 404 returned error can't find the container with id 1f18ed2ad249bdeed734ff8e9c89fbdafe46a7eed610f8b4b293e51b634ea168 Jan 17 14:39:08 edgenius microshift[2779]: kubelet W0117 14:39:08.725135 2779 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb86ab6ef_df28_455c_9373_4a4832000498.slice/crio-8640b7c0e5d7dc8f91c6527dd2a54fefdbfe492032fa7a91d77f3c1aeb5ecb0c.scope WatchSource:0}: Error finding container 8640b7c0e5d7dc8f91c6527dd2a54fefdbfe492032fa7a91d77f3c1aeb5ecb0c: Status 404 returned error can't find the container with id 8640b7c0e5d7dc8f91c6527dd2a54fefdbfe492032fa7a91d77f3c1aeb5ecb0c Jan 17 14:39:08 edgenius microshift[2779]: kubelet W0117 14:39:08.750305 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63af6fe5_f768_4f46_b1c2_c25945b2d5a4.slice/crio-5ac69e1dc4da7209b7f2ee930e78e1c81678302bef489526c467eeb911146b6c.scope WatchSource:0}: Error finding container 5ac69e1dc4da7209b7f2ee930e78e1c81678302bef489526c467eeb911146b6c: Status 404 returned error can't find the container with id 5ac69e1dc4da7209b7f2ee930e78e1c81678302bef489526c467eeb911146b6c Jan 17 14:39:09 edgenius microshift[2779]: kubelet W0117 14:39:09.546420 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d981a1f_fbb2_471a_aa8c_4fb30c289628.slice/crio-cd2e4701eb17fa489eedd73ec8b042241ecb29ab7c1c94f630c90ae5b2116c84.scope WatchSource:0}: Error finding container cd2e4701eb17fa489eedd73ec8b042241ecb29ab7c1c94f630c90ae5b2116c84: Status 404 returned error can't find the container with id cd2e4701eb17fa489eedd73ec8b042241ecb29ab7c1c94f630c90ae5b2116c84 Jan 17 14:39:09 edgenius microshift[2779]: kube-apiserver W0117 14:39:09.672028 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:39:09 edgenius microshift[2779]: kube-apiserver E0117 14:39:09.672227 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: 
Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:39:10 edgenius microshift[2779]: kubelet W0117 14:39:10.604061 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06feb4c2_9a44_433d_b2a1_ab7e76cea2eb.slice/crio-8c19ef04b13563a96fe0d66caed5f577315cebf95ee32fca644537f830f8d90c.scope WatchSource:0}: Error finding container 8c19ef04b13563a96fe0d66caed5f577315cebf95ee32fca644537f830f8d90c: Status 404 returned error can't find the container with id 8c19ef04b13563a96fe0d66caed5f577315cebf95ee32fca644537f830f8d90c Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.593325 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.593315643 +0000 UTC m=+68.798908190 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5" (UniqueName: "kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a") pod "edgeauthzpolicystore-5889cf9977-sljcq" (UID: "5ba3e342-4e77-4728-8617-6cb001d446b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.694651 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.694641183 +0000 UTC m=+68.900233630 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6" (UniqueName: "kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6") pod "edgesubscriptionservice-6779669c5f-8tgbq" (UID: "218615a4-d28f-4014-822d-84c6af570fe2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.694682 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.694675583 +0000 UTC m=+68.900268030 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20" (UniqueName: "kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2") pod "edgerouter-5d74457fcf-k7hlw" (UID: "deb6d14e-2b25-41bf-a1ab-8b111acd0e78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.695038 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.695027688 +0000 UTC m=+68.900620135 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-8343fecb-6ce6-4780-80cb-9684ff788e87" (UniqueName: "kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4") pod "edgeauthadminapi-649c48bb6b-r4ndx" (UID: "f2d63a65-c662-4d47-a073-0f1184f6ed0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.695602 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.695595695 +0000 UTC m=+68.901188142 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-b7796e6c-734d-4051-bc6f-80b44887ce39" (UniqueName: "kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.696148 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.696138701 +0000 UTC m=+68.901731148 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-833494e0-e898-4273-bdf5-4407a1a16caf" (UniqueName: "kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d") pod "edgeauditeventservice-6689859d58-vjwz4" (UID: "29d572ed-b570-4b8f-85a0-2b43f8c5cb08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.696768 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.696757309 +0000 UTC m=+68.902349756 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-76f00de9-a1b9-494b-ab95-1f55febf7413" (UniqueName: "kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.698210 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.698200027 +0000 UTC m=+68.903792474 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d" (UniqueName: "kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6") pod "edgeconfigurationservice-db54f49b9-89kfz" (UID: "4bf035da-7e84-4e00-8a19-818452f8f30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.698526 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.69851953 +0000 UTC m=+68.904111977 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f" (UniqueName: "kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c") pod "edgealarmsubscription-f49965fd-tdd6x" (UID: "a7b376b4-0e44-4940-9bcc-8c9ad42b02d9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.698846 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.698839934 +0000 UTC m=+68.904432381 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-53896aae-aac4-45d7-b18f-139128576f5f" (UniqueName: "kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746") pod "edgeinfomodel-6db75d77ff-cxgnw" (UID: "a583cdea-1a90-4331-ac02-3a01de3fb5b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet W0117 14:39:11.707333 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59eb2370_7941_4982_adde_19ea693c7bf0.slice/crio-f3316435f6e401c1e2e2e0d64aba7a1a068c8d3786813e269c23d792a057260c.scope WatchSource:0}: Error finding container f3316435f6e401c1e2e2e0d64aba7a1a068c8d3786813e269c23d792a057260c: Status 404 returned error can't find the container with id f3316435f6e401c1e2e2e0d64aba7a1a068c8d3786813e269c23d792a057260c Jan 17 14:39:11 edgenius microshift[2779]: kubelet W0117 14:39:11.763028 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb180b903_2cce_4884_b171_abb2042fb354.slice/crio-b485b06b0e8cf7f9c2e1a055e4fa0c3e0af4bbb0a1414be884093a7bf228a543.scope WatchSource:0}: Error finding container b485b06b0e8cf7f9c2e1a055e4fa0c3e0af4bbb0a1414be884093a7bf228a543: Status 404 returned error can't find the container with id b485b06b0e8cf7f9c2e1a055e4fa0c3e0af4bbb0a1414be884093a7bf228a543 Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.799237 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.799225162 +0000 UTC m=+69.004817609 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1" (UniqueName: "kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.800872 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.800848382 +0000 UTC m=+69.006440829 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-7a2b2701-6804-4389-89bd-96ed13312f78" (UniqueName: "kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d") pod "edgeauthenticationserver-77667796cc-wz8px" (UID: "99254a2a-f130-4eaa-bae0-8f40af8082d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.802407 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.802375101 +0000 UTC m=+69.007967648 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529" (UniqueName: "kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b") pod "edgetyperegistrydb-56b49789c7-xrvhf" (UID: "daea4ebe-2c14-4dc4-83de-a4c37d005b23") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.804610 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.804364425 +0000 UTC m=+69.009956872 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-2c24f91c-5968-49cb-a325-a9514087828b" (UniqueName: "kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.804671 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.804656529 +0000 UTC m=+69.010249076 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-89281368-8fe8-4a9b-be69-3254d7e153e1" (UniqueName: "kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b") pod "edgeauthdb-55f84588f-n9mmq" (UID: "bfb4052a-0189-4b30-8f47-80d4485b5ebf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:11 edgenius microshift[2779]: kubelet E0117 14:39:11.901159 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:27.901138109 +0000 UTC m=+69.106730556 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4" (UniqueName: "kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006") pod "edgeinfomodeldb-5fbd9887d8-hdm8j" (UID: "20676429-968e-49f3-81fe-4cf06a875c4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.103214 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.103201181 +0000 UTC m=+69.308793628 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5" (UniqueName: "kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7") pod "edgedeviceapiorchestrator-ff895dcb9-hwzvr" (UID: "fdc0b70a-0540-41c9-b14e-29e5d04d9084") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.203885 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.203869813 +0000 UTC m=+69.409462260 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a" (UniqueName: "kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2") pod "edgeapigateway-6669ccbd5d-jffch" (UID: "83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.317706 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.317683305 +0000 UTC m=+69.523275852 (durationBeforeRetry 16s).
Error: MountVolume.MountDevice failed for volume "pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1" (UniqueName: "kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751") pod "edgeeventsubscription-7dd9f64c67-rf99m" (UID: "c35cebcb-87fe-4857-b3e5-312a2ee55902") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.405831 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.405818183 +0000 UTC m=+69.611410630 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-c1a2031d-e768-48b5-8951-eae888ae8f91" (UniqueName: "kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac") pod "edgemethodinvocation-7d5cb5d865-mdqtp" (UID: "799c5e55-8569-4df9-ad89-16e0d46bb5b7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.607510 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.60749975 +0000 UTC m=+69.813092297 (durationBeforeRetry 16s).
Error: MountVolume.MountDevice failed for volume "pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2" (UniqueName: "kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4") pod "edgeplatformeventsubscription-5699bfd6bf-lhnls" (UID: "30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet W0117 14:39:12.645762 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1fe1e9_5bbb_4c06_be50_091f0795f91d.slice/crio-653ecc6d31a1c75677d9dc8e5b58c305f02b603e42944c006ca7558acf4ccf82.scope WatchSource:0}: Error finding container 653ecc6d31a1c75677d9dc8e5b58c305f02b603e42944c006ca7558acf4ccf82: Status 404 returned error can't find the container with id 653ecc6d31a1c75677d9dc8e5b58c305f02b603e42944c006ca7558acf4ccf82
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.809202 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8 podName: nodeName:}" failed. No retries permitted until 2023-01-17 14:39:28.809189818 +0000 UTC m=+70.014782265 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-bfccbc83-78d0-46d3-bb90-639eb139f067" (UniqueName: "kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8") pod "edgeauthadminui-bbb77c5f7-b5jrg" (UID: "b9ab877e-4277-412f-9d76-f3833c0807fc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet E0117 14:39:12.809245 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39 podName: nodeName:}" failed.
No retries permitted until 2023-01-17 14:39:28.809237218 +0000 UTC m=+70.014829665 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09" (UniqueName: "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39") pod "edge-broker-7fd9b99b6c-ncxvq" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers
Jan 17 14:39:12 edgenius microshift[2779]: kubelet I0117 14:39:12.902972 2779 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Jan 17 14:39:12 edgenius microshift[2779]: kubelet I0117 14:39:12.903016 2779 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Jan 17 14:39:12 edgenius microshift[2779]: kubelet I0117 14:39:12.911486 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:12 edgenius microshift[2779]: kubelet I0117 14:39:12.911558 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-715180ea-d66b-4c0e-b754-18397f45e045\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e71a24a-b569-48e6-b1d1-0bd3dad7cba9\") pod \"edgefilestorage-fd95f9fcb-2l447\" (UID: \"49ac2d99-3c8d-4886-a60e-898d1dfeb9bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f8048cd195ea7945c9175e791b5c597924c8beb33c4dfdd43b37ce228579f129/globalmount\"" pod="edgenius/edgefilestorage-fd95f9fcb-2l447"
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.112895 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
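The mount failures above stem from a startup race: the kubelet retried MountDevice before the topolvm.io CSI driver registered at 14:39:12 (csi_plugin.go "Register new plugin"), after which the retries succeed. A minimal sketch for pulling the affected pods out of journal text; the PVC/UID values in the inlined sample entry are hypothetical stand-ins, and in practice the input would come from `journalctl -u microshift --no-pager`:

```shell
#!/bin/sh
# List pods whose volume mounts failed because the CSI driver was not yet
# registered. A sample journal entry (hypothetical pvc/uid) stands in for
# the real stream from: journalctl -u microshift --no-pager
sample='Error: MountVolume.MountDevice failed for volume "pvc-example" pod "edgeauthdb-55f84588f-n9mmq" (UID: "uid") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name topolvm.io not found in the list of registered CSI drivers'

printf '%s\n' "$sample" \
  | grep 'not found in the list of registered CSI drivers' \
  | grep -oE 'pod "[^"]+"' \
  | sort -u
```

If the same pods stop appearing after the driver's registration timestamp, the errors were transient and need no action.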
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.112931 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-478365f1-474d-43f9-91d6-8dccc0bc53ec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^08ee40e7-35f3-4679-959d-899d3ecce10f\") pod \"edgevariablesubscription-6db6c446c5-vnbbm\" (UID: \"04b78a0a-d69e-4dad-817b-40e4b7b399d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bdbcf59cb9c70983a4d6708c0691909d26958bace8201d25ef3cb232f14c427b/globalmount\"" pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm"
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.216190 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.216227 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-e1f9cacc-84f1-40f4-ab2b-f27c92d9e23d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a3c0796-01a9-483b-9f84-b0672e69cab7\") pod \"edgeauthzpolicyserver-5b4966b595-stp8v\" (UID: \"8ad2af1d-571d-4352-a43a-8d1511797a50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/93109835b1d57b9ca36a4d5615c686989d8b412b085396147844a07f9bd1c9e9/globalmount\"" pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v"
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.316773 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:13 edgenius microshift[2779]: kubelet I0117 14:39:13.316816 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-3b9998c6-944e-44c0-8379-c7f47e615b06\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dce57ee6-2be6-4efd-aa15-d1efd3affbcc\") pod \"edgetyperegistry-587d8f8d84-5w96j\" (UID: \"c073d676-39d2-4584-b032-06f164bb8202\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ac6bff01da0bd1f0d999b81a06960c7bb5394552cfdddfa0981ce4d894b419de/globalmount\"" pod="edgenius/edgetyperegistry-587d8f8d84-5w96j"
Jan 17 14:39:13 edgenius microshift[2779]: kubelet W0117 14:39:13.752811 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73bf4d14_9234_4f3d_a7a1_282e3fc81909.slice/crio-294c4c2b16613a0df5f95b8cca158f15fd801f8227fb7d198182c33298cae79c.scope WatchSource:0}: Error finding container 294c4c2b16613a0df5f95b8cca158f15fd801f8227fb7d198182c33298cae79c: Status 404 returned error can't find the container with id 294c4c2b16613a0df5f95b8cca158f15fd801f8227fb7d198182c33298cae79c
Jan 17 14:39:14 edgenius microshift[2779]: kubelet I0117 14:39:14.324095 2779 patch_prober.go:29] interesting pod/router-default-ddc545d88-fnfsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 17 14:39:14 edgenius microshift[2779]: [-]has-synced failed: reason withheld
Jan 17 14:39:14 edgenius microshift[2779]: [+]process-running ok
Jan 17 14:39:14 edgenius microshift[2779]: healthz check failed
Jan 17 14:39:20 edgenius microshift[2779]: kube-controller-manager E0117 14:39:20.083434 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:39:20 edgenius microshift[2779]: kube-controller-manager I0117 14:39:20.083590 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:39:27 edgenius microshift[2779]: kube-apiserver I0117 14:39:27.265662 2779 controller.go:616] quota admission added evaluator for: ingresses.networking.k8s.io
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.674295 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.674342 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-aa8c4ab6-3c77-4823-b037-ee99ab79a7a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^77833bef-c765-4252-857d-82de71462f0a\") pod \"edgeauthzpolicystore-5889cf9977-sljcq\" (UID: \"5ba3e342-4e77-4728-8617-6cb001d446b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4f34efa0faac63beb4ccb7adbe377c57686032f9d19a3f5e6d1f70d3da777227/globalmount\"" pod="edgenius/edgeauthzpolicystore-5889cf9977-sljcq"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.774940 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775133 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-908083bb-d8d3-4ef4-90b9-e3de096e451d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0c63d0cd-8357-47b7-a9a5-7729d68910d6\") pod \"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b4f239ddc8612487f117cc6a786dafc6dcd19e23c8fbbdfd79f874a58c35d985/globalmount\"" pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775210 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775234 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-b7796e6c-734d-4051-bc6f-80b44887ce39\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6f0d6276-3b3d-4e02-b1f7-8b7118c61f7d\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2b10b785a06d84c7acd8cb4ac654f997cdbc8f36c7595e1c1fc5d14afe9a75e1/globalmount\"" pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775580 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775695 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-7ab8728c-5dd2-4404-a896-b0ae5d692ae6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b230b3d1-0252-43a1-b437-772c67b18df6\") pod \"edgesubscriptionservice-6779669c5f-8tgbq\" (UID: \"218615a4-d28f-4014-822d-84c6af570fe2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ed3fa00802a44bcce98edc58457745643d95c49c3fd7297f4164987550b7111a/globalmount\"" pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775838 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775848 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775866 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-833494e0-e898-4273-bdf5-4407a1a16caf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6547fc8e-64da-49a8-8335-2c705db2276d\") pod \"edgeauditeventservice-6689859d58-vjwz4\" (UID: \"29d572ed-b570-4b8f-85a0-2b43f8c5cb08\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d5ff2bf9bf6009de8d06f17762d9f58d9c124820b70d9f2ee5c9ce0bb854e74a/globalmount\"" pod="edgenius/edgeauditeventservice-6689859d58-vjwz4"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775873 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-76f00de9-a1b9-494b-ab95-1f55febf7413\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b4f8c1fb-3081-4020-98c8-3b01a655bf92\") pod \"edgeconfigurationservice-db54f49b9-89kfz\" (UID: \"4bf035da-7e84-4e00-8a19-818452f8f30d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8b6f74c496eb61aab6abdeba5c5b375a21ef65b12d6bb17ad1565348e0dbab37/globalmount\"" pod="edgenius/edgeconfigurationservice-db54f49b9-89kfz"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.775721 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776168 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-53896aae-aac4-45d7-b18f-139128576f5f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d1a3198-df8e-4b70-86fd-a7c73c962746\") pod \"edgeinfomodel-6db75d77ff-cxgnw\" (UID: \"a583cdea-1a90-4331-ac02-3a01de3fb5b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4ff46708d353d8a7a353e3019ea8ffb89edd993bfc2b6d9064d0e505abdb01b3/globalmount\"" pod="edgenius/edgeinfomodel-6db75d77ff-cxgnw"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776304 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776328 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-cd188fb5-73ca-4f6d-9e09-1f83142f142f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2d66fb7-1759-4d0a-adb3-8130920e6a6c\") pod \"edgealarmsubscription-f49965fd-tdd6x\" (UID: \"a7b376b4-0e44-4940-9bcc-8c9ad42b02d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/91c2e9d83e6ee19389de8f107aaa84fa982470613fbf9f17e9142d29b096de65/globalmount\"" pod="edgenius/edgealarmsubscription-f49965fd-tdd6x"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776175 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776545 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-c86e97d4-9d02-47ee-840f-6c3236b9bb20\" (UniqueName: \"kubernetes.io/csi/topolvm.io^33d207d4-742c-4369-b196-4f44d0247eb2\") pod \"edgerouter-5d74457fcf-k7hlw\" (UID: \"deb6d14e-2b25-41bf-a1ab-8b111acd0e78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7600452e35eb9b51e2ff831938053500d066999a4728ffe1b0f2c5a963bd971b/globalmount\"" pod="edgenius/edgerouter-5d74457fcf-k7hlw"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776704 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.776742 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-8343fecb-6ce6-4780-80cb-9684ff788e87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7cc10505-0d87-42a9-841e-a93ef8b6fac4\") pod \"edgeauthadminapi-649c48bb6b-r4ndx\" (UID: \"f2d63a65-c662-4d47-a073-0f1184f6ed0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b45ea230f8f44768af9e409968ad0ab74c81950b3ee8b595d28ae33a2439d6ee/globalmount\"" pod="edgenius/edgeauthadminapi-649c48bb6b-r4ndx"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.877964 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.878229 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-7b5f487d-ae5a-4820-b0fe-d3149828d2a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29dee98b-c64a-473b-b125-b83a6bc97f73\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/53486135e7f02807c7c5e0da96f8b1a52d8eed8e5fec5535a2d6a330ceba5c4a/globalmount\"" pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.878260 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.878309 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-89281368-8fe8-4a9b-be69-3254d7e153e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1269d488-6696-4938-a270-23ad506bc21b\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ea38682fcca09a19d7d30a9308d1de4e0adf3b81374a6d5b227e185d84175d8a/globalmount\"" pod="edgenius/edgeauthdb-55f84588f-n9mmq"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.878924 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.878964 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-2c24f91c-5968-49cb-a325-a9514087828b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8b8d05f9-d673-45f1-b8c4-1464b01ab657\") pod \"edgeauthdb-55f84588f-n9mmq\" (UID: \"bfb4052a-0189-4b30-8f47-80d4485b5ebf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/011f6b7ed6593d564b1345724197483fb9bd54c79da7bfd96500c7acca0f273c/globalmount\"" pod="edgenius/edgeauthdb-55f84588f-n9mmq"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.879361 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.879417 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-a8fb267b-946e-4e11-a0ae-7e7801c4a529\" (UniqueName: \"kubernetes.io/csi/topolvm.io^44ce8ef5-2ca7-4cb7-aadb-45cb3fce179b\") pod \"edgetyperegistrydb-56b49789c7-xrvhf\" (UID: \"daea4ebe-2c14-4dc4-83de-a4c37d005b23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/abb73fedb36632527561aaffff5a015c3cce0d7df08dbf6b1b9a48e7ebe20b8b/globalmount\"" pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.879794 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.879855 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-7a2b2701-6804-4389-89bd-96ed13312f78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2a35518d-487d-498b-91d8-e03848d9d11d\") pod \"edgeauthenticationserver-77667796cc-wz8px\" (UID: \"99254a2a-f130-4eaa-bae0-8f40af8082d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/08ba526482ecb6ffb246cbfa69ffbac5b934e2460e890f19285903e4fbfb885a/globalmount\"" pod="edgenius/edgeauthenticationserver-77667796cc-wz8px"
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.981169 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:27 edgenius microshift[2779]: kubelet I0117 14:39:27.981222 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-6549e31c-5301-4828-8ad7-2312cbc7a4d4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b2853d35-6c8a-438f-afd1-3fa2bbaa6006\") pod \"edgeinfomodeldb-5fbd9887d8-hdm8j\" (UID: \"20676429-968e-49f3-81fe-4cf06a875c4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/da69d96d53367702bde9a9494e213595ea4b6c3d62d6c7f8718d8efca50c7f3a/globalmount\"" pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.180230 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.180415 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-1b504a84-7ebc-40cc-b967-923fe4e170c5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7842446-5ac8-48a0-8883-bbf27ceb03c7\") pod \"edgedeviceapiorchestrator-ff895dcb9-hwzvr\" (UID: \"fdc0b70a-0540-41c9-b14e-29e5d04d9084\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b0d0a2f77126157903cc968e26d0da708552a2488afc103b127b33909d36a856/globalmount\"" pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.282541 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.282573 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-acb2d71c-7ea5-4a37-a32c-d15e95a1991a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e6180ed-3db9-409f-bf5d-f80b97222fb2\") pod \"edgeapigateway-6669ccbd5d-jffch\" (UID: \"83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7c0111a6a822f9e02fb021bd35ea9f766cb7fc76523bc1c48af26823e481bd76/globalmount\"" pod="edgenius/edgeapigateway-6669ccbd5d-jffch"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.381878 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.381924 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-f647bd64-9a4d-43d5-adde-a7181f8ba0c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^77ea3c9f-3784-44ed-b330-af92a498d751\") pod \"edgeeventsubscription-7dd9f64c67-rf99m\" (UID: \"c35cebcb-87fe-4857-b3e5-312a2ee55902\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/9d5ac37c695884e2ad3827f5b110db29f20c3cde44b3bb92f68af98614a13688/globalmount\"" pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.483540 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.483589 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-c1a2031d-e768-48b5-8951-eae888ae8f91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a4dd78ac-2138-44ee-9ad1-b545d77b7aac\") pod \"edgemethodinvocation-7d5cb5d865-mdqtp\" (UID: \"799c5e55-8569-4df9-ad89-16e0d46bb5b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/313705833c7c2edab0719f2e1e4cad5f4f49f925c3ed91107b39e6e13f02bc28/globalmount\"" pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet W0117 14:39:28.488432 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba3e342_4e77_4728_8617_6cb001d446b0.slice/crio-af49e12e06133647acf0693789f94c042d3fdcc625d13928dda15ce3ece07829.scope WatchSource:0}: Error finding container af49e12e06133647acf0693789f94c042d3fdcc625d13928dda15ce3ece07829: Status 404 returned error can't find the container with id af49e12e06133647acf0693789f94c042d3fdcc625d13928dda15ce3ece07829
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.686275 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.686318 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-801fdff1-bb7b-4860-95e8-f0d41ef257b2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17326a1-6d39-4223-9090-f13784438cb4\") pod \"edgeplatformeventsubscription-5699bfd6bf-lhnls\" (UID: \"30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/eee92cf9d860fedc0e650a89e0172701226ecef0f78dd33a13a7e8191248494e/globalmount\"" pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.889192 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.889234 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-bfccbc83-78d0-46d3-bb90-639eb139f067\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ad3364d5-8738-405c-96db-378f09e29fe8\") pod \"edgeauthadminui-bbb77c5f7-b5jrg\" (UID: \"b9ab877e-4277-412f-9d76-f3833c0807fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0be015ba409f85c12ec1f3ee789fcdcbc46b5d1670a7b2505e97e8493492f560/globalmount\"" pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg"
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.889637 2779 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 17 14:39:28 edgenius microshift[2779]: kubelet I0117 14:39:28.889669 2779 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09\" (UniqueName: \"kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39\") pod \"edge-broker-7fd9b99b6c-ncxvq\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/418b4bfcc3eaf7442317901bbfb23302601e9272b9a93b75d6964dd980e1476a/globalmount\"" pod="edgenius/edge-broker-7fd9b99b6c-ncxvq"
Jan 17 14:39:29 edgenius microshift[2779]: kubelet W0117 14:39:29.043720 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29d572ed_b570_4b8f_85a0_2b43f8c5cb08.slice/crio-a547082b9e8b3c7b60995ef45f76f3bc4023c048660dc48e7faa9f93cb509043.scope WatchSource:0}: Error finding container a547082b9e8b3c7b60995ef45f76f3bc4023c048660dc48e7faa9f93cb509043: Status 404 returned error can't find the container with id a547082b9e8b3c7b60995ef45f76f3bc4023c048660dc48e7faa9f93cb509043
Jan 17 14:39:29 edgenius microshift[2779]: kubelet W0117 14:39:29.114989 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeb6d14e_2b25_41bf_a1ab_8b111acd0e78.slice/crio-3bff2d23fd1291fad3a2a647edcdc0e8552f6c97d8d0586554bb449e91ae97f4.scope WatchSource:0}: Error finding container 3bff2d23fd1291fad3a2a647edcdc0e8552f6c97d8d0586554bb449e91ae97f4: Status 404 returned error can't find the container with id 3bff2d23fd1291fad3a2a647edcdc0e8552f6c97d8d0586554bb449e91ae97f4
Jan 17 14:39:29 edgenius microshift[2779]: kubelet W0117 14:39:29.392043 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d63a65_c662_4d47_a073_0f1184f6ed0f.slice/crio-1fe7bfc5ce48c6f8a1a89704a504542aa452dc46d876f3fd274fa93d42712355.scope WatchSource:0}: Error finding container 1fe7bfc5ce48c6f8a1a89704a504542aa452dc46d876f3fd274fa93d42712355: Status 404 returned error can't find the container with id 1fe7bfc5ce48c6f8a1a89704a504542aa452dc46d876f3fd274fa93d42712355
Jan 17 14:39:29 edgenius microshift[2779]: kubelet W0117 14:39:29.746299 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b376b4_0e44_4940_9bcc_8c9ad42b02d9.slice/crio-c063bdaa61ac26ae647a1113065c22273a5e870bfdff0e0b3fcd62a51870b025.scope WatchSource:0}: Error finding container c063bdaa61ac26ae647a1113065c22273a5e870bfdff0e0b3fcd62a51870b025: Status 404 returned error can't find the container with id c063bdaa61ac26ae647a1113065c22273a5e870bfdff0e0b3fcd62a51870b025
Jan 17 14:39:30 edgenius microshift[2779]: kubelet W0117 14:39:30.735654 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfb4052a_0189_4b30_8f47_80d4485b5ebf.slice/crio-228c0903031d7a92bd3ec2b09f2386b9a9c24a5fa693d590d535e22e8562a9d9.scope WatchSource:0}: Error finding container 228c0903031d7a92bd3ec2b09f2386b9a9c24a5fa693d590d535e22e8562a9d9: Status 404 returned error can't find the container with id 228c0903031d7a92bd3ec2b09f2386b9a9c24a5fa693d590d535e22e8562a9d9
Jan 17 14:39:31 edgenius microshift[2779]: kubelet W0117 14:39:31.246504 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99254a2a_f130_4eaa_bae0_8f40af8082d8.slice/crio-55d741dd56787f5f5f990f7696acfedb0a68374a95b87524e7f8dfedbec4c6bd.scope WatchSource:0}: Error finding container 55d741dd56787f5f5f990f7696acfedb0a68374a95b87524e7f8dfedbec4c6bd: Status 404 returned error can't find the container with id 55d741dd56787f5f5f990f7696acfedb0a68374a95b87524e7f8dfedbec4c6bd
Jan 17 14:39:35 edgenius microshift[2779]: kube-controller-manager E0117 14:39:35.084989 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:39:35 edgenius microshift[2779]: kube-controller-manager I0117 14:39:35.085743 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:39:43 edgenius microshift[2779]: kube-apiserver W0117 14:39:43.161187 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:39:43 edgenius microshift[2779]: kube-apiserver E0117 14:39:43.161414 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:39:50 edgenius microshift[2779]: kube-controller-manager E0117 14:39:50.085499 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:39:50 edgenius microshift[2779]: kube-controller-manager I0117 14:39:50.085655 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:39:59 edgenius microshift[2779]: kube-apiserver W0117 14:39:59.459790 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:39:59 edgenius microshift[2779]: kube-apiserver E0117 14:39:59.460265 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:40:05 edgenius microshift[2779]: kube-controller-manager E0117 14:40:05.086149 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:40:05 edgenius microshift[2779]: kube-controller-manager I0117 14:40:05.086406 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:40:20 edgenius microshift[2779]: kube-controller-manager E0117 14:40:20.087473 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:40:20 edgenius microshift[2779]: kube-controller-manager I0117 14:40:20.087636 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:40:29 edgenius microshift[2779]: kube-apiserver W0117 14:40:29.612208 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list
*v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:40:29 edgenius microshift[2779]: kube-apiserver E0117 14:40:29.612253 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:40:35 edgenius microshift[2779]: kube-controller-manager E0117 14:40:35.088433 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:40:35 edgenius microshift[2779]: kube-controller-manager I0117 14:40:35.088560 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:40:41 edgenius microshift[2779]: kube-apiserver W0117 14:40:41.057670 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:40:41 edgenius microshift[2779]: kube-apiserver E0117 14:40:41.057956 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:40:50 edgenius microshift[2779]: kube-controller-manager E0117 14:40:50.089258 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:40:50 edgenius microshift[2779]: 
kube-controller-manager I0117 14:40:50.089419 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.849356 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgetyperegistrydb pvc-volume-edgetyperegistrydb01], unattached volumes=[kube-api-access-86w9s pvc-volume-edgetyperegistrydb pvc-volume-edgetyperegistrydb01 edgetyperegistrydb-secret]: timed out waiting for the condition" pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.849635 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgetyperegistrydb pvc-volume-edgetyperegistrydb01], unattached volumes=[kube-api-access-86w9s pvc-volume-edgetyperegistrydb pvc-volume-edgetyperegistrydb01 edgetyperegistrydb-secret]: timed out waiting for the condition" pod="edgenius/edgetyperegistrydb-56b49789c7-xrvhf" podUID=daea4ebe-2c14-4dc4-83de-a4c37d005b23 Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.885507 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgesubscriptionservice], unattached volumes=[edgesubscriptionservice-secret kube-api-access-xsp6r pvc-volume-edgesubscriptionservice]: timed out waiting for the condition" pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq" Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.885586 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgesubscriptionservice], unattached volumes=[edgesubscriptionservice-secret kube-api-access-xsp6r pvc-volume-edgesubscriptionservice]: timed out waiting for the condition" 
pod="edgenius/edgesubscriptionservice-6779669c5f-8tgbq" podUID=218615a4-d28f-4014-822d-84c6af570fe2 Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.980425 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgetyperegistry], unattached volumes=[edgetyperegistry-secret kube-api-access-w5vx2 pvc-volume-edgetyperegistry]: timed out waiting for the condition" pod="edgenius/edgetyperegistry-587d8f8d84-5w96j" Jan 17 14:40:58 edgenius microshift[2779]: kubelet E0117 14:40:58.980490 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgetyperegistry], unattached volumes=[edgetyperegistry-secret kube-api-access-w5vx2 pvc-volume-edgetyperegistry]: timed out waiting for the condition" pod="edgenius/edgetyperegistry-587d8f8d84-5w96j" podUID=c073d676-39d2-4584-b032-06f164bb8202 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.040562 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeinfomodeldb pvc-volume-edgeinfomodeldb01], unattached volumes=[pvc-volume-edgeinfomodeldb pvc-volume-edgeinfomodeldb01 edgeinfomodeldb-secret kube-api-access-zmlp6]: timed out waiting for the condition" pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.041129 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeinfomodeldb pvc-volume-edgeinfomodeldb01], unattached volumes=[pvc-volume-edgeinfomodeldb pvc-volume-edgeinfomodeldb01 edgeinfomodeldb-secret kube-api-access-zmlp6]: timed out waiting for the condition" pod="edgenius/edgeinfomodeldb-5fbd9887d8-hdm8j" podUID=20676429-968e-49f3-81fe-4cf06a875c4e Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.048307 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted 
volumes=[pvc-volume-edge-broker], unattached volumes=[pvc-volume-edge-broker password-file edge-broker-secret kube-api-access-hw2xb]: timed out waiting for the condition" pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.048358 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edge-broker], unattached volumes=[pvc-volume-edge-broker password-file edge-broker-secret kube-api-access-hw2xb]: timed out waiting for the condition" pod="edgenius/edge-broker-7fd9b99b6c-ncxvq" podUID=4af511ec-802a-47ed-8715-fb0f87f76549 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.068083 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeapigateway], unattached volumes=[pvc-volume-edgeapigateway edgeapigateway-secret kube-api-access-9mbnm]: timed out waiting for the condition" pod="edgenius/edgeapigateway-6669ccbd5d-jffch" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.068192 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeapigateway], unattached volumes=[pvc-volume-edgeapigateway edgeapigateway-secret kube-api-access-9mbnm]: timed out waiting for the condition" pod="edgenius/edgeapigateway-6669ccbd5d-jffch" podUID=83a2f7bb-54a2-4a57-ab2d-f2ecfa7a75f7 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.075984 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgedeviceapiorchestrator], unattached volumes=[kube-api-access-gnhsz pvc-volume-edgedeviceapiorchestrator edgedeviceapiorchestrator-secret]: timed out waiting for the condition" pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.076030 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted 
volumes=[pvc-volume-edgedeviceapiorchestrator], unattached volumes=[kube-api-access-gnhsz pvc-volume-edgedeviceapiorchestrator edgedeviceapiorchestrator-secret]: timed out waiting for the condition" pod="edgenius/edgedeviceapiorchestrator-ff895dcb9-hwzvr" podUID=fdc0b70a-0540-41c9-b14e-29e5d04d9084 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.132570 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeeventsubscription], unattached volumes=[pvc-volume-edgeeventsubscription edgeeventsubscription-secret kube-api-access-nntnd]: timed out waiting for the condition" pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.132629 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeeventsubscription], unattached volumes=[pvc-volume-edgeeventsubscription edgeeventsubscription-secret kube-api-access-nntnd]: timed out waiting for the condition" pod="edgenius/edgeeventsubscription-7dd9f64c67-rf99m" podUID=c35cebcb-87fe-4857-b3e5-312a2ee55902 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.149961 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeauthadminui], unattached volumes=[edgeauthadminui-secret kube-api-access-xw4pf pvc-volume-edgeauthadminui]: timed out waiting for the condition" pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.150028 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeauthadminui], unattached volumes=[edgeauthadminui-secret kube-api-access-xw4pf pvc-volume-edgeauthadminui]: timed out waiting for the condition" pod="edgenius/edgeauthadminui-bbb77c5f7-b5jrg" podUID=b9ab877e-4277-412f-9d76-f3833c0807fc Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.190672 
2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgefilestorage], unattached volumes=[edgefilestorage-secret kube-api-access-jq8zm pvc-volume-edgefilestorage]: timed out waiting for the condition" pod="edgenius/edgefilestorage-fd95f9fcb-2l447" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.190726 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgefilestorage], unattached volumes=[edgefilestorage-secret kube-api-access-jq8zm pvc-volume-edgefilestorage]: timed out waiting for the condition" pod="edgenius/edgefilestorage-fd95f9fcb-2l447" podUID=49ac2d99-3c8d-4886-a60e-898d1dfeb9bd Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.202653 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgevariablesubscription], unattached volumes=[pvc-volume-edgevariablesubscription edgevariablesubscription-secret kube-api-access-fpct9]: timed out waiting for the condition" pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.202704 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgevariablesubscription], unattached volumes=[pvc-volume-edgevariablesubscription edgevariablesubscription-secret kube-api-access-fpct9]: timed out waiting for the condition" pod="edgenius/edgevariablesubscription-6db6c446c5-vnbbm" podUID=04b78a0a-d69e-4dad-817b-40e4b7b399d2 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.227331 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgemethodinvocation], unattached volumes=[edgemethodinvocation-secret kube-api-access-vrwhf pvc-volume-edgemethodinvocation]: timed out waiting for the condition" pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp" Jan 17 14:40:59 
edgenius microshift[2779]: kubelet E0117 14:40:59.227382 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgemethodinvocation], unattached volumes=[edgemethodinvocation-secret kube-api-access-vrwhf pvc-volume-edgemethodinvocation]: timed out waiting for the condition" pod="edgenius/edgemethodinvocation-7d5cb5d865-mdqtp" podUID=799c5e55-8569-4df9-ad89-16e0d46bb5b7 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.233682 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeauthzpolicyserver], unattached volumes=[edgeauthzpolicyserver-secret kube-api-access-btrp4 pvc-volume-edgeauthzpolicyserver]: timed out waiting for the condition" pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.233724 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeauthzpolicyserver], unattached volumes=[edgeauthzpolicyserver-secret kube-api-access-btrp4 pvc-volume-edgeauthzpolicyserver]: timed out waiting for the condition" pod="edgenius/edgeauthzpolicyserver-5b4966b595-stp8v" podUID=8ad2af1d-571d-4352-a43a-8d1511797a50 Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.239988 2779 kubelet.go:1754] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pvc-volume-edgeplatformeventsubscription], unattached volumes=[pvc-volume-edgeplatformeventsubscription edgeplatformeventsubscription-secret kube-api-access-sddw7]: timed out waiting for the condition" pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls" Jan 17 14:40:59 edgenius microshift[2779]: kubelet E0117 14:40:59.240047 2779 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[pvc-volume-edgeplatformeventsubscription], unattached volumes=[pvc-volume-edgeplatformeventsubscription edgeplatformeventsubscription-secret 
kube-api-access-sddw7]: timed out waiting for the condition" pod="edgenius/edgeplatformeventsubscription-5699bfd6bf-lhnls" podUID=30bcd67e-9b7a-49d7-9cb9-8f54fdbb106f Jan 17 14:41:04 edgenius microshift[2779]: kube-apiserver W0117 14:41:04.131695 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:41:04 edgenius microshift[2779]: kube-apiserver E0117 14:41:04.131950 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:41:05 edgenius microshift[2779]: kube-controller-manager E0117 14:41:05.089667 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:41:05 edgenius microshift[2779]: kube-controller-manager I0117 14:41:05.089796 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:41:10 edgenius microshift[2779]: kubelet W0117 14:41:10.903240 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaea4ebe_2c14_4dc4_83de_a4c37d005b23.slice/crio-e1cbb6be5cffa86bb7cb76929edd661b2abd1a257bd38864f9c55fef39762b49.scope WatchSource:0}: Error finding container e1cbb6be5cffa86bb7cb76929edd661b2abd1a257bd38864f9c55fef39762b49: Status 404 returned error can't find the container with id e1cbb6be5cffa86bb7cb76929edd661b2abd1a257bd38864f9c55fef39762b49 Jan 17 14:41:12 edgenius microshift[2779]: kubelet W0117 
14:41:12.004594 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ac2d99_3c8d_4886_a60e_898d1dfeb9bd.slice/crio-b2602e58f8b0487841898ca6a74fe824aa356285cdd104ead90be386ac7b892c.scope WatchSource:0}: Error finding container b2602e58f8b0487841898ca6a74fe824aa356285cdd104ead90be386ac7b892c: Status 404 returned error can't find the container with id b2602e58f8b0487841898ca6a74fe824aa356285cdd104ead90be386ac7b892c Jan 17 14:41:13 edgenius microshift[2779]: kubelet W0117 14:41:13.129946 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bcd67e_9b7a_49d7_9cb9_8f54fdbb106f.slice/crio-ff6d8c416c7f5c773b61e19584b01930d6edef97975989564381fd4ae1911590.scope WatchSource:0}: Error finding container ff6d8c416c7f5c773b61e19584b01930d6edef97975989564381fd4ae1911590: Status 404 returned error can't find the container with id ff6d8c416c7f5c773b61e19584b01930d6edef97975989564381fd4ae1911590 Jan 17 14:41:13 edgenius microshift[2779]: kubelet W0117 14:41:13.210248 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad2af1d_571d_4352_a43a_8d1511797a50.slice/crio-210ee0887fb9f860081bac10cad97d62de71c9a5aeff633a311eabbbe937d737.scope WatchSource:0}: Error finding container 210ee0887fb9f860081bac10cad97d62de71c9a5aeff633a311eabbbe937d737: Status 404 returned error can't find the container with id 210ee0887fb9f860081bac10cad97d62de71c9a5aeff633a311eabbbe937d737 Jan 17 14:41:15 edgenius microshift[2779]: kubelet W0117 14:41:15.692701 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc35cebcb_87fe_4857_b3e5_312a2ee55902.slice/crio-5a4363d2bdabe81805a5e84eef17cac98fabb211719cb20497b2452bab55171e.scope WatchSource:0}: Error finding container 
5a4363d2bdabe81805a5e84eef17cac98fabb211719cb20497b2452bab55171e: Status 404 returned error can't find the container with id 5a4363d2bdabe81805a5e84eef17cac98fabb211719cb20497b2452bab55171e Jan 17 14:41:15 edgenius microshift[2779]: kubelet W0117 14:41:15.812722 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc073d676_39d2_4584_b032_06f164bb8202.slice/crio-5b52bef12042fc703022180d33991a3109af5f70b2dd14722d84c0854da9a513.scope WatchSource:0}: Error finding container 5b52bef12042fc703022180d33991a3109af5f70b2dd14722d84c0854da9a513: Status 404 returned error can't find the container with id 5b52bef12042fc703022180d33991a3109af5f70b2dd14722d84c0854da9a513 Jan 17 14:41:16 edgenius microshift[2779]: kubelet W0117 14:41:16.214897 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod799c5e55_8569_4df9_ad89_16e0d46bb5b7.slice/crio-0d14587bcd060b124ead1bb8f39b839ed9407162725e6bd9fe44806c36116fed.scope WatchSource:0}: Error finding container 0d14587bcd060b124ead1bb8f39b839ed9407162725e6bd9fe44806c36116fed: Status 404 returned error can't find the container with id 0d14587bcd060b124ead1bb8f39b839ed9407162725e6bd9fe44806c36116fed Jan 17 14:41:16 edgenius microshift[2779]: kubelet W0117 14:41:16.388369 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20676429_968e_49f3_81fe_4cf06a875c4e.slice/crio-657bd1f5e7855b4014176b306d1c1b88da74da92b03c05a7a6e7043f03dfe89f.scope WatchSource:0}: Error finding container 657bd1f5e7855b4014176b306d1c1b88da74da92b03c05a7a6e7043f03dfe89f: Status 404 returned error can't find the container with id 657bd1f5e7855b4014176b306d1c1b88da74da92b03c05a7a6e7043f03dfe89f Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.206520 2779 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9ab877e_4277_412f_9d76_f3833c0807fc.slice/crio-3ea21b0fe7961d9922158401620e5a0bbaf77af886f1ea4bfe0b596226db53a3.scope WatchSource:0}: Error finding container 3ea21b0fe7961d9922158401620e5a0bbaf77af886f1ea4bfe0b596226db53a3: Status 404 returned error can't find the container with id 3ea21b0fe7961d9922158401620e5a0bbaf77af886f1ea4bfe0b596226db53a3 Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.224046 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83a2f7bb_54a2_4a57_ab2d_f2ecfa7a75f7.slice/crio-e3483a4b7a1c88f20d5e96db1f30ce4e44273ec6ef4ce6a39e34b99a89773af7.scope WatchSource:0}: Error finding container e3483a4b7a1c88f20d5e96db1f30ce4e44273ec6ef4ce6a39e34b99a89773af7: Status 404 returned error can't find the container with id e3483a4b7a1c88f20d5e96db1f30ce4e44273ec6ef4ce6a39e34b99a89773af7 Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.344045 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service: no such file or directory Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.344112 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service: no such file or directory Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.344139 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/memory/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service: no such file or directory Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.344156 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service: no such file or directory Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.344173 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r293742168cee4910a1e1ca4f3a7ddfcc.service: no such file or directory Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.392851 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4af511ec_802a_47ed_8715_fb0f87f76549.slice/crio-daac8e149c09838a7c315351715b4ecdcf5d9ce3265e18d0c6459955b716a343.scope WatchSource:0}: Error finding container daac8e149c09838a7c315351715b4ecdcf5d9ce3265e18d0c6459955b716a343: Status 404 returned error can't find the container with id daac8e149c09838a7c315351715b4ecdcf5d9ce3265e18d0c6459955b716a343 Jan 17 14:41:17 edgenius microshift[2779]: kubelet W0117 14:41:17.599263 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod218615a4_d28f_4014_822d_84c6af570fe2.slice/crio-9587b11f516325c83a32441b0aa4d23024e81765e2d6d398e65cbb48e597803a.scope WatchSource:0}: Error finding container 9587b11f516325c83a32441b0aa4d23024e81765e2d6d398e65cbb48e597803a: Status 404 returned error can't find the container with id 9587b11f516325c83a32441b0aa4d23024e81765e2d6d398e65cbb48e597803a Jan 17 14:41:17 edgenius microshift[2779]: 
kubelet W0117 14:41:17.996737 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04b78a0a_d69e_4dad_817b_40e4b7b399d2.slice/crio-be89d1a34500dfef9be3561388424c3e931d5aff50c35306195d5aa7512f64c7.scope WatchSource:0}: Error finding container be89d1a34500dfef9be3561388424c3e931d5aff50c35306195d5aa7512f64c7: Status 404 returned error can't find the container with id be89d1a34500dfef9be3561388424c3e931d5aff50c35306195d5aa7512f64c7 Jan 17 14:41:18 edgenius microshift[2779]: kubelet W0117 14:41:18.243002 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc0b70a_0540_41c9_b14e_29e5d04d9084.slice/crio-0b25b406d04c9a39b5365a9d49e11daf23fecad7aff2551ba87d1c07f6ee2f4e.scope WatchSource:0}: Error finding container 0b25b406d04c9a39b5365a9d49e11daf23fecad7aff2551ba87d1c07f6ee2f4e: Status 404 returned error can't find the container with id 0b25b406d04c9a39b5365a9d49e11daf23fecad7aff2551ba87d1c07f6ee2f4e Jan 17 14:41:20 edgenius microshift[2779]: kube-controller-manager E0117 14:41:20.091607 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:41:20 edgenius microshift[2779]: kube-controller-manager I0117 14:41:20.092211 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:41:21 edgenius microshift[2779]: kubelet W0117 14:41:21.126105 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/cpu,cpuacct/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service: no such file or directory
Jan 17 14:41:21 edgenius microshift[2779]: kubelet W0117 14:41:21.126334 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service: no such file or directory
Jan 17 14:41:21 edgenius microshift[2779]: kubelet W0117 14:41:21.126371 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service: no such file or directory
Jan 17 14:41:21 edgenius microshift[2779]: kubelet W0117 14:41:21.126386 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service: no such file or directory
Jan 17 14:41:21 edgenius microshift[2779]: kubelet W0117 14:41:21.126412 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-rcc49c75ade4b4bb9bc55c825ff019482.service: no such file or directory
Jan 17 14:41:24 edgenius microshift[2779]: kubelet W0117 14:41:24.240180 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/memory/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service: no such file or directory
Jan 17 14:41:24 edgenius microshift[2779]: kubelet W0117 14:41:24.240942 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service: no such file or directory
Jan 17 14:41:24 edgenius microshift[2779]: kubelet W0117 14:41:24.240974 2779 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-rbae8b19bdf364e55a20bd20a916fb2ad.service: no such file or directory
Jan 17 14:41:35 edgenius microshift[2779]: kube-controller-manager E0117 14:41:35.091381 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:41:35 edgenius microshift[2779]: kube-controller-manager I0117 14:41:35.091701 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:41:39 edgenius microshift[2779]: kube-apiserver W0117 14:41:39.881249 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:41:39 edgenius microshift[2779]: kube-apiserver E0117 14:41:39.881436 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:41:46 edgenius microshift[2779]: kube-apiserver W0117 14:41:46.744973 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:41:46 edgenius microshift[2779]: kube-apiserver E0117 14:41:46.745019 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:41:50 edgenius microshift[2779]: kube-controller-manager E0117 14:41:50.091791 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:41:50 edgenius microshift[2779]: kube-controller-manager I0117 14:41:50.091923 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:42:05 edgenius microshift[2779]: kube-controller-manager E0117 14:42:05.092422 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:42:05 edgenius microshift[2779]: kube-controller-manager I0117 14:42:05.092521 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:42:19 edgenius microshift[2779]: kube-controller-manager I0117 14:42:19.576195 2779 event.go:294] "Event occurred" object="edgenius/edge-broker-7fd9b99b6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: edge-broker-7fd9b99b6c-ncnqm"
Jan 17 14:42:19 edgenius microshift[2779]: kubelet I0117 14:42:19.722062 2779 topology_manager.go:205] "Topology Admit Handler"
Jan 17 14:42:19 edgenius microshift[2779]: kubelet I0117 14:42:19.932260 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"password-file\" (UniqueName: \"kubernetes.io/projected/eacc9d0b-a261-4a3e-854e-d487e5599437-password-file\") pod \"edge-broker-7fd9b99b6c-ncnqm\" (UID: \"eacc9d0b-a261-4a3e-854e-d487e5599437\") " pod="edgenius/edge-broker-7fd9b99b6c-ncnqm"
Jan 17 14:42:19 edgenius microshift[2779]: kubelet I0117 14:42:19.932343 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edge-broker-secret\" (UniqueName: \"kubernetes.io/secret/eacc9d0b-a261-4a3e-854e-d487e5599437-edge-broker-secret\") pod \"edge-broker-7fd9b99b6c-ncnqm\" (UID: \"eacc9d0b-a261-4a3e-854e-d487e5599437\") " pod="edgenius/edge-broker-7fd9b99b6c-ncnqm"
Jan 17 14:42:19 edgenius microshift[2779]: kubelet I0117 14:42:19.932382 2779 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mv4\" (UniqueName: \"kubernetes.io/projected/eacc9d0b-a261-4a3e-854e-d487e5599437-kube-api-access-m9mv4\") pod \"edge-broker-7fd9b99b6c-ncnqm\" (UID: \"eacc9d0b-a261-4a3e-854e-d487e5599437\") " pod="edgenius/edge-broker-7fd9b99b6c-ncnqm"
Jan 17 14:42:20 edgenius microshift[2779]: kube-controller-manager E0117 14:42:20.092929 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:42:20 edgenius microshift[2779]: kube-controller-manager I0117 14:42:20.093811 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.129061 2779 scope.go:115] "RemoveContainer" containerID="73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6"
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.165236 2779 scope.go:115] "RemoveContainer" containerID="73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6"
Jan 17 14:42:20 edgenius microshift[2779]: kubelet E0117 14:42:20.166430 2779 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6\": container with ID starting with 73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6 not found: ID does not exist" containerID="73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6"
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.166472 2779 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6} err="failed to get container status \"73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6\": rpc error: code = NotFound desc = could not find container \"73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6\": container with ID starting with 73fe1010ce5cf00c1fdbfa0de45e55b7bd8d8447a82bb55ebf3143a0b8021ef6 not found: ID does not exist"
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.236217 2779 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"pvc-volume-edge-broker\" (UniqueName: \"kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39\") pod \"4af511ec-802a-47ed-8715-fb0f87f76549\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") "
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.236275 2779 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"password-file\" (UniqueName: \"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-password-file\") pod \"4af511ec-802a-47ed-8715-fb0f87f76549\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") "
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.236300 2779 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"edge-broker-secret\" (UniqueName: \"kubernetes.io/secret/4af511ec-802a-47ed-8715-fb0f87f76549-edge-broker-secret\") pod \"4af511ec-802a-47ed-8715-fb0f87f76549\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") "
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.236336 2779 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw2xb\" (UniqueName: \"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-kube-api-access-hw2xb\") pod \"4af511ec-802a-47ed-8715-fb0f87f76549\" (UID: \"4af511ec-802a-47ed-8715-fb0f87f76549\") "
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.252349 2779 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-kube-api-access-hw2xb" (OuterVolumeSpecName: "kube-api-access-hw2xb") pod "4af511ec-802a-47ed-8715-fb0f87f76549" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549"). InnerVolumeSpecName "kube-api-access-hw2xb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.254475 2779 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^30b7191e-8ac1-4148-b48e-20bef8849d39" (OuterVolumeSpecName: "pvc-volume-edge-broker") pod "4af511ec-802a-47ed-8715-fb0f87f76549" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549"). InnerVolumeSpecName "pvc-d32a6692-a6b3-4263-a0af-0c4e0897ea09". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.259834 2779 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af511ec-802a-47ed-8715-fb0f87f76549-edge-broker-secret" (OuterVolumeSpecName: "edge-broker-secret") pod "4af511ec-802a-47ed-8715-fb0f87f76549" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549"). InnerVolumeSpecName "edge-broker-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.262298 2779 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-password-file" (OuterVolumeSpecName: "password-file") pod "4af511ec-802a-47ed-8715-fb0f87f76549" (UID: "4af511ec-802a-47ed-8715-fb0f87f76549"). InnerVolumeSpecName "password-file". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.337195 2779 reconciler.go:399] "Volume detached for volume \"password-file\" (UniqueName: \"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-password-file\") on node \"edgenius\" DevicePath \"\""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.337255 2779 reconciler.go:399] "Volume detached for volume \"edge-broker-secret\" (UniqueName: \"kubernetes.io/secret/4af511ec-802a-47ed-8715-fb0f87f76549-edge-broker-secret\") on node \"edgenius\" DevicePath \"\""
Jan 17 14:42:20 edgenius microshift[2779]: kubelet I0117 14:42:20.337269 2779 reconciler.go:399] "Volume detached for volume \"kube-api-access-hw2xb\" (UniqueName: \"kubernetes.io/projected/4af511ec-802a-47ed-8715-fb0f87f76549-kube-api-access-hw2xb\") on node \"edgenius\" DevicePath \"\""
Jan 17 14:42:20 edgenius microshift[2779]: kube-apiserver E0117 14:42:20.458431 2779 watch.go:270] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc01625b980), InnerCloseNotifierFlusher:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*filters.auditResponseWriter)(0xc017af8300), InnerCloseNotifierFlusher:(*http2.responseWriter)(0xc01c0f65f8)}}, encoder:(*versioning.codec)(0xc01c6f03c0), memAllocator:(*runtime.Allocator)(0xc003260180)})
Jan 17 14:42:20 edgenius microshift[2779]: kubelet W0117 14:42:20.876335 2779 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeacc9d0b_a261_4a3e_854e_d487e5599437.slice/crio-c0d04e297ecdb2059412cb4fb8a81c388df4e5174f3a077c4840ae3f6c76f6c3.scope WatchSource:0}: Error finding container c0d04e297ecdb2059412cb4fb8a81c388df4e5174f3a077c4840ae3f6c76f6c3: Status 404 returned error can't find the container with id c0d04e297ecdb2059412cb4fb8a81c388df4e5174f3a077c4840ae3f6c76f6c3
Jan 17 14:42:21 edgenius microshift[2779]: kubelet I0117 14:42:21.124662 2779 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4af511ec-802a-47ed-8715-fb0f87f76549 path="/var/lib/kubelet/pods/4af511ec-802a-47ed-8715-fb0f87f76549/volumes"
Jan 17 14:42:22 edgenius microshift[2779]: kube-apiserver W0117 14:42:22.361994 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:42:22 edgenius microshift[2779]: kube-apiserver E0117 14:42:22.362526 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:42:29 edgenius microshift[2779]: kube-apiserver W0117 14:42:29.810199 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:42:29 edgenius microshift[2779]: kube-apiserver E0117 14:42:29.810423 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:42:35 edgenius microshift[2779]: kube-controller-manager E0117 14:42:35.095009 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:42:35 edgenius microshift[2779]: kube-controller-manager I0117 14:42:35.095228 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:42:48 edgenius microshift[2779]: kubelet E0117 14:42:48.974855 2779 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78\": container with ID starting with 9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78 not found: ID does not exist" containerID="9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78"
Jan 17 14:42:48 edgenius microshift[2779]: kubelet I0117 14:42:48.974953 2779 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78" err="rpc error: code = NotFound desc = could not find container \"9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78\": container with ID starting with 9b5f6c6c1bcd1fcc55b9f55d69d4cc1ebd3dc1ffc4521c345030c2567e13ea78 not found: ID does not exist"
Jan 17 14:42:50 edgenius microshift[2779]: kube-controller-manager E0117 14:42:50.094888 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:42:50 edgenius microshift[2779]: kube-controller-manager I0117 14:42:50.095079 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:43:05 edgenius microshift[2779]: kube-controller-manager E0117 14:43:05.094339 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:43:05 edgenius microshift[2779]: kube-controller-manager I0117 14:43:05.094538 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:43:19 edgenius microshift[2779]: kube-apiserver W0117 14:43:19.306939 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:43:19 edgenius microshift[2779]: kube-apiserver E0117 14:43:19.307181 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:43:20 edgenius microshift[2779]: kube-controller-manager E0117 14:43:20.095905 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:43:20 edgenius microshift[2779]: kube-controller-manager I0117 14:43:20.096025 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:43:24 edgenius microshift[2779]: kube-apiserver W0117 14:43:24.520231 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:43:24 edgenius microshift[2779]: kube-apiserver E0117 14:43:24.520300 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:43:24 edgenius microshift[2779]: kube-scheduler E0117 14:43:24.808767 2779 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"edgefileprocessor-6b88cf4fb5-hnt7t.173b1f7c332160e7", GenerateName:"", Namespace:"edgenius", SelfLink:"", UID:"84f34884-6faf-4a15-bd45-ac9799530165", ResourceVersion:"691328", Generation:0, CreationTimestamp:time.Date(2023, time.January, 17, 14, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"microshift", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:time.Date(2023, time.January, 17, 14, 38, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0092c2648), Subresource:""}}}, EventTime:time.Date(2023, time.January, 17, 14, 38, 24, 792485503, time.Local), Series:(*v1.EventSeries)(0xc00ad7e540), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-edgenius", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"edgenius", Name:"edgefileprocessor-6b88cf4fb5-hnt7t", UID:"a7df214d-82fe-4be8-b9f3-ea7c82a50ebb", APIVersion:"v1", ResourceVersion:"677113", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedLastTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedCount:0}': 'Event "edgefileprocessor-6b88cf4fb5-hnt7t.173b1f7c332160e7" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!)
Jan 17 14:43:35 edgenius microshift[2779]: kube-controller-manager E0117 14:43:35.097387 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:43:35 edgenius microshift[2779]: kube-controller-manager I0117 14:43:35.097495 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:43:50 edgenius microshift[2779]: kube-controller-manager E0117 14:43:50.098325 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:43:50 edgenius microshift[2779]: kube-controller-manager I0117 14:43:50.098466 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:43:58 edgenius microshift[2779]: kube-apiserver W0117 14:43:58.479409 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:43:58 edgenius microshift[2779]: kube-apiserver E0117 14:43:58.479645 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:44:05 edgenius microshift[2779]: kube-controller-manager E0117 14:44:05.097645 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:44:05 edgenius microshift[2779]: kube-controller-manager I0117 14:44:05.098014 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:44:15 edgenius microshift[2779]: kube-apiserver W0117 14:44:15.986878 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:44:15 edgenius microshift[2779]: kube-apiserver E0117 14:44:15.987310 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:44:20 edgenius microshift[2779]: kube-controller-manager E0117 14:44:20.099283 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:44:20 edgenius microshift[2779]: kube-controller-manager I0117 14:44:20.099740 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:44:35 edgenius microshift[2779]: kube-controller-manager E0117 14:44:35.099519 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:44:35 edgenius microshift[2779]: kube-controller-manager I0117 14:44:35.099932 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:44:48 edgenius microshift[2779]: kube-apiserver W0117 14:44:48.273681 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:44:48 edgenius microshift[2779]: kube-apiserver E0117 14:44:48.273746 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:44:50 edgenius microshift[2779]: kube-controller-manager E0117 14:44:50.099859 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:44:50 edgenius microshift[2779]: kube-controller-manager I0117 14:44:50.100717 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:45:05 edgenius microshift[2779]: kube-controller-manager E0117 14:45:05.101796 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:45:05 edgenius microshift[2779]: kube-controller-manager I0117 14:45:05.101917 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:45:10 edgenius microshift[2779]: kube-apiserver W0117 14:45:10.970560 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:45:10 edgenius microshift[2779]: kube-apiserver E0117 14:45:10.970654 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:45:20 edgenius microshift[2779]: kube-controller-manager E0117 14:45:20.101669 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:45:20 edgenius microshift[2779]: kube-controller-manager I0117 14:45:20.101746 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:45:20 edgenius microshift[2779]: kube-apiserver W0117 14:45:20.579077 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:45:20 edgenius microshift[2779]: kube-apiserver E0117 14:45:20.579142 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:45:34 edgenius microshift[2779]: {"level":"warn","ts":"2023-01-17T14:45:34.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.436404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/masterleases/\" range_end:\"/kubernetes.io/masterleases0\" ","response":"range_response_count:1 size:144"}
Jan 17 14:45:34 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:45:34.020Z","caller":"traceutil/trace.go:171","msg":"trace[341582049] range","detail":"{range_begin:/kubernetes.io/masterleases/; range_end:/kubernetes.io/masterleases0; response_count:1; response_revision:692665; }","duration":"103.841819ms","start":"2023-01-17T14:45:33.916Z","end":"2023-01-17T14:45:34.020Z","steps":["trace[341582049] 'range keys from in-memory index tree' (duration: 102.1247ms)"],"step_count":1}
Jan 17 14:45:35 edgenius microshift[2779]: kube-controller-manager E0117 14:45:35.103679 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:45:35 edgenius microshift[2779]: kube-controller-manager I0117 14:45:35.103802 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:45:42 edgenius microshift[2779]: kube-apiserver W0117 14:45:42.786324 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:45:42 edgenius microshift[2779]: kube-apiserver E0117 14:45:42.786586 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:45:50 edgenius microshift[2779]: kube-controller-manager E0117 14:45:50.102719 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:45:50 edgenius microshift[2779]: kube-controller-manager I0117 14:45:50.102833 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:46:03 edgenius microshift[2779]: kube-apiserver W0117 14:46:03.754021 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:46:03 edgenius microshift[2779]: kube-apiserver E0117 14:46:03.754291 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:46:05 edgenius microshift[2779]: kube-controller-manager E0117 14:46:05.102959 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:46:05 edgenius microshift[2779]: kube-controller-manager I0117 14:46:05.103126 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:46:20 edgenius microshift[2779]: kube-controller-manager E0117 14:46:20.103565 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:46:20 edgenius microshift[2779]: kube-controller-manager I0117 14:46:20.103698 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:46:35 edgenius microshift[2779]: kube-controller-manager E0117 14:46:35.103410 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:46:35 edgenius microshift[2779]: kube-controller-manager I0117 14:46:35.103525 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:46:37 edgenius microshift[2779]: kube-apiserver W0117 14:46:37.304018 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:46:37 edgenius microshift[2779]: kube-apiserver E0117 14:46:37.304057 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:46:38 edgenius microshift[2779]: kube-apiserver W0117 14:46:38.148660 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:46:38 edgenius microshift[2779]: kube-apiserver E0117 14:46:38.148704 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:46:50 edgenius microshift[2779]: kube-controller-manager E0117 14:46:50.104310 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:46:50 edgenius microshift[2779]: kube-controller-manager I0117 14:46:50.104430 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:47:05 edgenius microshift[2779]: kube-controller-manager E0117 14:47:05.105052 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:47:05 edgenius microshift[2779]: kube-controller-manager I0117 14:47:05.105132 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:47:09 edgenius microshift[2779]: kube-apiserver W0117 14:47:09.962516 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:47:09 edgenius microshift[2779]: kube-apiserver E0117 14:47:09.962577 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:47:20 edgenius microshift[2779]: kube-controller-manager E0117 14:47:20.105434 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:47:20 edgenius microshift[2779]: kube-controller-manager I0117 14:47:20.105609 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:47:34 edgenius microshift[2779]: kube-apiserver W0117 14:47:34.803603 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:47:34 edgenius microshift[2779]: kube-apiserver E0117 14:47:34.803797 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:47:35 edgenius microshift[2779]: kube-controller-manager E0117 14:47:35.106573 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:47:35 edgenius microshift[2779]:
kube-controller-manager I0117 14:47:35.106720 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:47:50 edgenius microshift[2779]: kube-controller-manager E0117 14:47:50.107001 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:47:50 edgenius microshift[2779]: kube-controller-manager I0117 14:47:50.107171 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:48:01 edgenius microshift[2779]: kube-apiserver W0117 14:48:01.488551 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:48:01 edgenius microshift[2779]: kube-apiserver E0117 14:48:01.488603 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:48:05 edgenius microshift[2779]: kube-controller-manager E0117 14:48:05.107564 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:48:05 edgenius microshift[2779]: kube-controller-manager I0117 14:48:05.107725 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" 
type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:48:20 edgenius microshift[2779]: kube-controller-manager E0117 14:48:20.108637 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:48:20 edgenius microshift[2779]: kube-controller-manager I0117 14:48:20.108765 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:48:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:48:21.195Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":692456} Jan 17 14:48:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:48:21.248Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":692456,"took":"52.893388ms"} Jan 17 14:48:23 edgenius microshift[2779]: kube-apiserver W0117 14:48:23.632160 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:48:23 edgenius microshift[2779]: kube-apiserver E0117 14:48:23.632199 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:48:35 edgenius microshift[2779]: kube-controller-manager E0117 14:48:35.109746 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: 
storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:48:35 edgenius microshift[2779]: kube-controller-manager I0117 14:48:35.109848 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:48:45 edgenius microshift[2779]: kube-apiserver W0117 14:48:45.361754 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:48:45 edgenius microshift[2779]: kube-apiserver E0117 14:48:45.361799 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:48:50 edgenius microshift[2779]: kube-controller-manager E0117 14:48:50.110715 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:48:50 edgenius microshift[2779]: kube-controller-manager I0117 14:48:50.110938 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:49:04 edgenius microshift[2779]: kube-apiserver W0117 14:49:04.534312 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:49:04 edgenius microshift[2779]: kube-apiserver E0117 
14:49:04.534438 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:49:05 edgenius microshift[2779]: kube-controller-manager E0117 14:49:05.112113 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:49:05 edgenius microshift[2779]: kube-controller-manager I0117 14:49:05.112251 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:49:20 edgenius microshift[2779]: kube-controller-manager E0117 14:49:20.112062 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:49:20 edgenius microshift[2779]: kube-controller-manager I0117 14:49:20.112263 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:49:35 edgenius microshift[2779]: kube-controller-manager E0117 14:49:35.113018 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:49:35 edgenius microshift[2779]: kube-controller-manager I0117 14:49:35.113223 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" 
type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:49:43 edgenius microshift[2779]: kube-apiserver W0117 14:49:43.828377 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:49:43 edgenius microshift[2779]: kube-apiserver E0117 14:49:43.828579 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:49:45 edgenius microshift[2779]: kube-apiserver W0117 14:49:45.002068 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:49:45 edgenius microshift[2779]: kube-apiserver E0117 14:49:45.002208 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:49:50 edgenius microshift[2779]: kube-controller-manager E0117 14:49:50.113151 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:49:50 edgenius microshift[2779]: kube-controller-manager I0117 14:49:50.113295 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not 
found" Jan 17 14:50:05 edgenius microshift[2779]: kube-controller-manager E0117 14:50:05.114334 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:50:05 edgenius microshift[2779]: kube-controller-manager I0117 14:50:05.114485 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:50:20 edgenius microshift[2779]: kube-controller-manager E0117 14:50:20.115270 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:50:20 edgenius microshift[2779]: kube-controller-manager I0117 14:50:20.115451 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:50:26 edgenius microshift[2779]: kube-apiserver W0117 14:50:26.156025 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:50:26 edgenius microshift[2779]: kube-apiserver E0117 14:50:26.156130 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:50:35 edgenius microshift[2779]: kube-controller-manager E0117 14:50:35.116007 2779 pv_controller.go:1541] error finding provisioning plugin for claim 
edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:50:35 edgenius microshift[2779]: kube-controller-manager I0117 14:50:35.116079 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:50:41 edgenius microshift[2779]: kube-apiserver W0117 14:50:41.842578 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:50:41 edgenius microshift[2779]: kube-apiserver E0117 14:50:41.842655 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:50:50 edgenius microshift[2779]: kube-controller-manager E0117 14:50:50.116459 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:50:50 edgenius microshift[2779]: kube-controller-manager I0117 14:50:50.116669 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:51:05 edgenius microshift[2779]: kube-controller-manager E0117 14:51:05.117423 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 
14:51:05 edgenius microshift[2779]: kube-controller-manager I0117 14:51:05.117548 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:51:14 edgenius microshift[2779]: kube-apiserver W0117 14:51:14.872201 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:51:14 edgenius microshift[2779]: kube-apiserver E0117 14:51:14.872410 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:51:20 edgenius microshift[2779]: kube-controller-manager E0117 14:51:20.117746 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:51:20 edgenius microshift[2779]: kube-controller-manager I0117 14:51:20.117833 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:51:26 edgenius microshift[2779]: kube-apiserver W0117 14:51:26.176753 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:51:26 edgenius microshift[2779]: kube-apiserver E0117 14:51:26.176825 2779 reflector.go:140] 
github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:51:35 edgenius microshift[2779]: kube-controller-manager E0117 14:51:35.117717 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:51:35 edgenius microshift[2779]: kube-controller-manager I0117 14:51:35.117947 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:51:49 edgenius microshift[2779]: kube-apiserver W0117 14:51:49.626391 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:51:49 edgenius microshift[2779]: kube-apiserver E0117 14:51:49.626604 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:51:50 edgenius microshift[2779]: kube-controller-manager E0117 14:51:50.118016 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:51:50 edgenius microshift[2779]: kube-controller-manager I0117 14:51:50.118166 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" 
message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:51:50 edgenius microshift[2779]: kube-apiserver I0117 14:51:50.717081 2779 controller.go:616] quota admission added evaluator for: edgemetadata.edge.edgenius.abb Jan 17 14:51:50 edgenius microshift[2779]: kube-apiserver I0117 14:51:50.729780 2779 controller.go:616] quota admission added evaluator for: edgemodules.edge.edgenius.abb Jan 17 14:51:50 edgenius microshift[2779]: kube-apiserver I0117 14:51:50.814972 2779 controller.go:616] quota admission added evaluator for: routes.route.openshift.io Jan 17 14:52:05 edgenius microshift[2779]: kube-controller-manager E0117 14:52:05.119134 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:52:05 edgenius microshift[2779]: kube-controller-manager I0117 14:52:05.119295 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:52:14 edgenius microshift[2779]: kube-apiserver W0117 14:52:14.285199 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:52:14 edgenius microshift[2779]: kube-apiserver E0117 14:52:14.285565 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:52:20 edgenius microshift[2779]: kube-controller-manager E0117 14:52:20.119402 2779 pv_controller.go:1541] error finding 
provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:52:20 edgenius microshift[2779]: kube-controller-manager I0117 14:52:20.119675 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:52:22 edgenius microshift[2779]: kube-apiserver W0117 14:52:22.788551 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:52:22 edgenius microshift[2779]: kube-apiserver E0117 14:52:22.788814 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:52:35 edgenius microshift[2779]: kube-controller-manager E0117 14:52:35.119227 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:52:35 edgenius microshift[2779]: kube-controller-manager I0117 14:52:35.119419 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:52:44 edgenius microshift[2779]: kube-apiserver W0117 14:52:44.441847 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) 
Jan 17 14:52:44 edgenius microshift[2779]: kube-apiserver E0117 14:52:44.441943 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:52:50 edgenius microshift[2779]: kube-controller-manager E0117 14:52:50.120825 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:52:50 edgenius microshift[2779]: kube-controller-manager I0117 14:52:50.121145 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:53:05 edgenius microshift[2779]: kube-controller-manager E0117 14:53:05.121371 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:53:05 edgenius microshift[2779]: kube-controller-manager I0117 14:53:05.121949 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:53:16 edgenius microshift[2779]: kube-apiserver W0117 14:53:16.656983 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:53:16 edgenius microshift[2779]: kube-apiserver E0117 14:53:16.657193 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:53:20 edgenius microshift[2779]: kube-controller-manager E0117 14:53:20.121166 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:53:20 edgenius microshift[2779]: kube-controller-manager I0117 14:53:20.121330 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:53:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:53:21.203Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":692920}
Jan 17 14:53:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:53:21.218Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":692920,"took":"14.747785ms"}
Jan 17 14:53:35 edgenius microshift[2779]: kube-controller-manager E0117 14:53:35.121271 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:53:35 edgenius microshift[2779]: kube-controller-manager I0117 14:53:35.121393 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:53:40 edgenius microshift[2779]: kube-apiserver W0117 14:53:40.412472 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:53:40 edgenius microshift[2779]: kube-apiserver E0117 14:53:40.412516 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:53:50 edgenius microshift[2779]: kube-controller-manager E0117 14:53:50.122201 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:53:50 edgenius microshift[2779]: kube-controller-manager I0117 14:53:50.122331 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:54:05 edgenius microshift[2779]: kube-controller-manager E0117 14:54:05.122386 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:54:05 edgenius microshift[2779]: kube-controller-manager I0117 14:54:05.122549 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:54:10 edgenius microshift[2779]: kube-apiserver W0117 14:54:10.191294 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:54:10 edgenius microshift[2779]: kube-apiserver E0117 14:54:10.192207 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:54:11 edgenius microshift[2779]: {"level":"warn","ts":"2023-01-17T14:54:11.828Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.00364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/leases/openshift-ovn-kubernetes/ovn-kubernetes-master\" ","response":"range_response_count:1 size:482"}
Jan 17 14:54:11 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:54:11.828Z","caller":"traceutil/trace.go:171","msg":"trace[2053913552] range","detail":"{range_begin:/kubernetes.io/leases/openshift-ovn-kubernetes/ovn-kubernetes-master; range_end:; response_count:1; response_revision:693464; }","duration":"113.320743ms","start":"2023-01-17T14:54:11.714Z","end":"2023-01-17T14:54:11.828Z","steps":["trace[2053913552] 'agreement among raft nodes before linearized reading' (duration: 55.802061ms)","trace[2053913552] 'range keys from in-memory index tree' (duration: 57.141877ms)"],"step_count":2}
Jan 17 14:54:20 edgenius microshift[2779]: kube-controller-manager E0117 14:54:20.123327 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:54:20 edgenius microshift[2779]: kube-controller-manager I0117 14:54:20.123485 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:54:22 edgenius microshift[2779]: kube-apiserver W0117 14:54:22.174697 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:54:22 edgenius microshift[2779]: kube-apiserver E0117 14:54:22.181357 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:54:24 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:54:24.001Z","caller":"traceutil/trace.go:171","msg":"trace[261269808] transaction","detail":"{read_only:false; response_revision:693484; number_of_response:1; }","duration":"150.577285ms","start":"2023-01-17T14:54:23.851Z","end":"2023-01-17T14:54:24.001Z","steps":["trace[261269808] 'process raft request' (duration: 76.336305ms)","trace[261269808] 'compare' (duration: 74.043578ms)"],"step_count":2}
Jan 17 14:54:35 edgenius microshift[2779]: kube-controller-manager E0117 14:54:35.123461 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:54:35 edgenius microshift[2779]: kube-controller-manager I0117 14:54:35.123566 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:54:42 edgenius microshift[2779]: kube-apiserver W0117 14:54:42.463042 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:54:42 edgenius microshift[2779]: kube-apiserver E0117 14:54:42.463358 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Jan 17 14:54:50 edgenius microshift[2779]: kube-controller-manager E0117 14:54:50.124829 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:54:50 edgenius microshift[2779]: kube-controller-manager I0117 14:54:50.125045 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
Jan 17 14:54:56 edgenius microshift[2779]: kube-apiserver W0117 14:54:56.192265 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:54:56 edgenius microshift[2779]: kube-apiserver E0117 14:54:56.192307 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Jan 17 14:55:05 edgenius microshift[2779]: kube-controller-manager E0117 14:55:05.126004 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found
Jan 17 14:55:05 edgenius microshift[2779]: kube-controller-manager I0117 14:55:05.126187 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor"
fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:55:20 edgenius microshift[2779]: kube-controller-manager E0117 14:55:20.126734 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:55:20 edgenius microshift[2779]: kube-controller-manager I0117 14:55:20.126879 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:55:35 edgenius microshift[2779]: kube-controller-manager E0117 14:55:35.127155 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:55:35 edgenius microshift[2779]: kube-controller-manager I0117 14:55:35.127294 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:55:41 edgenius microshift[2779]: kube-apiserver W0117 14:55:41.455067 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:55:41 edgenius microshift[2779]: kube-apiserver E0117 14:55:41.455152 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:55:50 
edgenius microshift[2779]: kube-controller-manager E0117 14:55:50.128298 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:55:50 edgenius microshift[2779]: kube-controller-manager I0117 14:55:50.128468 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:55:53 edgenius microshift[2779]: kube-apiserver W0117 14:55:53.963809 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:55:53 edgenius microshift[2779]: kube-apiserver E0117 14:55:53.964006 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:56:05 edgenius microshift[2779]: kube-controller-manager E0117 14:56:05.129202 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:56:05 edgenius microshift[2779]: kube-controller-manager I0117 14:56:05.129644 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:56:20 edgenius microshift[2779]: kube-controller-manager E0117 14:56:20.129471 2779 pv_controller.go:1541] error 
finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:56:20 edgenius microshift[2779]: kube-controller-manager I0117 14:56:20.129740 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:56:35 edgenius microshift[2779]: kube-controller-manager E0117 14:56:35.131354 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:56:35 edgenius microshift[2779]: kube-controller-manager I0117 14:56:35.131583 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:56:39 edgenius microshift[2779]: kube-apiserver W0117 14:56:39.417566 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:56:39 edgenius microshift[2779]: kube-apiserver E0117 14:56:39.417609 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:56:48 edgenius microshift[2779]: kube-apiserver W0117 14:56:48.693805 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get 
clusterresourcequotas.quota.openshift.io) Jan 17 14:56:48 edgenius microshift[2779]: kube-apiserver E0117 14:56:48.693997 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:56:50 edgenius microshift[2779]: kube-controller-manager E0117 14:56:50.132335 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:56:50 edgenius microshift[2779]: kube-controller-manager I0117 14:56:50.132508 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:57:05 edgenius microshift[2779]: kube-controller-manager E0117 14:57:05.131926 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:57:05 edgenius microshift[2779]: kube-controller-manager I0117 14:57:05.132009 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:57:20 edgenius microshift[2779]: kube-controller-manager E0117 14:57:20.132406 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:57:20 edgenius microshift[2779]: kube-controller-manager I0117 14:57:20.132540 2779 event.go:294] "Event occurred" 
object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:57:35 edgenius microshift[2779]: kube-controller-manager E0117 14:57:35.132989 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:57:35 edgenius microshift[2779]: kube-controller-manager I0117 14:57:35.133127 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:57:37 edgenius microshift[2779]: kube-apiserver W0117 14:57:37.086206 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:57:37 edgenius microshift[2779]: kube-apiserver E0117 14:57:37.086289 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:57:44 edgenius microshift[2779]: kube-apiserver W0117 14:57:44.301037 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:57:44 edgenius microshift[2779]: kube-apiserver E0117 14:57:44.301078 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list 
*v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:57:50 edgenius microshift[2779]: kube-controller-manager E0117 14:57:50.134063 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:57:50 edgenius microshift[2779]: kube-controller-manager I0117 14:57:50.134251 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:58:05 edgenius microshift[2779]: kube-controller-manager E0117 14:58:05.134809 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:58:05 edgenius microshift[2779]: kube-controller-manager I0117 14:58:05.134943 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:58:19 edgenius microshift[2779]: kube-apiserver W0117 14:58:19.681524 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:58:19 edgenius microshift[2779]: kube-apiserver E0117 14:58:19.681718 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:58:20 edgenius microshift[2779]: kube-controller-manager 
E0117 14:58:20.136185 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:58:20 edgenius microshift[2779]: kube-controller-manager I0117 14:58:20.136349 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:58:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:58:21.223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693386} Jan 17 14:58:21 edgenius microshift[2779]: {"level":"info","ts":"2023-01-17T14:58:21.240Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":693386,"took":"16.150703ms"} Jan 17 14:58:34 edgenius microshift[2779]: kube-apiserver W0117 14:58:34.117994 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:58:34 edgenius microshift[2779]: kube-apiserver E0117 14:58:34.118044 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:58:35 edgenius microshift[2779]: kube-controller-manager E0117 14:58:35.136826 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:58:35 edgenius microshift[2779]: kube-controller-manager I0117 14:58:35.136928 2779 event.go:294] "Event occurred" 
object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:58:50 edgenius microshift[2779]: kube-controller-manager E0117 14:58:50.137367 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:58:50 edgenius microshift[2779]: kube-controller-manager I0117 14:58:50.137672 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:58:57 edgenius microshift[2779]: kube-apiserver W0117 14:58:57.415452 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:58:57 edgenius microshift[2779]: kube-apiserver E0117 14:58:57.415501 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:59:05 edgenius microshift[2779]: kube-controller-manager E0117 14:59:05.138193 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:59:05 edgenius microshift[2779]: kube-controller-manager I0117 14:59:05.138389 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io 
\"odf-lvm-vgedgenius\" not found" Jan 17 14:59:20 edgenius microshift[2779]: kube-controller-manager E0117 14:59:20.139082 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:59:20 edgenius microshift[2779]: kube-controller-manager I0117 14:59:20.139296 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:59:25 edgenius microshift[2779]: kube-apiserver W0117 14:59:25.412417 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:59:25 edgenius microshift[2779]: kube-apiserver E0117 14:59:25.412456 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:59:35 edgenius microshift[2779]: kube-controller-manager E0117 14:59:35.140258 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:59:35 edgenius microshift[2779]: kube-controller-manager I0117 14:59:35.140493 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:59:43 edgenius microshift[2779]: kube-apiserver W0117 
14:59:43.303064 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:59:43 edgenius microshift[2779]: kube-apiserver E0117 14:59:43.303255 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 14:59:50 edgenius microshift[2779]: kube-controller-manager E0117 14:59:50.140198 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 14:59:50 edgenius microshift[2779]: kube-controller-manager I0117 14:59:50.140324 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 14:59:56 edgenius microshift[2779]: kube-apiserver W0117 14:59:56.546502 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 14:59:56 edgenius microshift[2779]: kube-apiserver E0117 14:59:56.546561 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 15:00:05 edgenius microshift[2779]: kube-controller-manager E0117 15:00:05.144827 2779 pv_controller.go:1541] error finding provisioning plugin for claim 
edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:00:05 edgenius microshift[2779]: kube-controller-manager I0117 15:00:05.145163 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:00:20 edgenius microshift[2779]: kube-controller-manager E0117 15:00:20.144402 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:00:20 edgenius microshift[2779]: kube-controller-manager I0117 15:00:20.144788 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:00:35 edgenius microshift[2779]: kube-controller-manager E0117 15:00:35.144196 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:00:35 edgenius microshift[2779]: kube-controller-manager I0117 15:00:35.144802 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:00:42 edgenius microshift[2779]: kube-apiserver W0117 15:00:42.640499 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 15:00:42 edgenius microshift[2779]: kube-apiserver E0117 
15:00:42.640550 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 15:00:50 edgenius microshift[2779]: kube-controller-manager E0117 15:00:50.144788 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:00:50 edgenius microshift[2779]: kube-controller-manager I0117 15:00:50.145062 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:00:51 edgenius microshift[2779]: kube-apiserver W0117 15:00:51.188767 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 15:00:51 edgenius microshift[2779]: kube-apiserver E0117 15:00:51.188837 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 15:01:05 edgenius microshift[2779]: kube-controller-manager E0117 15:01:05.145517 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:01:05 edgenius microshift[2779]: kube-controller-manager I0117 15:01:05.145667 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" 
kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:01:20 edgenius microshift[2779]: kube-controller-manager E0117 15:01:20.146687 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:01:20 edgenius microshift[2779]: kube-controller-manager I0117 15:01:20.146884 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:01:23 edgenius microshift[2779]: kube-apiserver W0117 15:01:23.494922 2779 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 15:01:23 edgenius microshift[2779]: kube-apiserver E0117 15:01:23.495129 2779 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io) Jan 17 15:01:35 edgenius microshift[2779]: kube-controller-manager E0117 15:01:35.147136 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:01:35 edgenius microshift[2779]: kube-controller-manager I0117 15:01:35.147216 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found" Jan 17 15:01:38 edgenius 
microshift[2779]: kube-apiserver W0117 15:01:38.810879 2779 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 15:01:38 edgenius microshift[2779]: kube-apiserver E0117 15:01:38.810934 2779 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Jan 17 15:01:50 edgenius microshift[2779]: kube-controller-manager E0117 15:01:50.147930 2779 pv_controller.go:1541] error finding provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found Jan 17 15:01:50 edgenius microshift[2779]: kube-controller-manager I0117 15:01:50.148071 2779 event.go:294] "Event occurred" object="edgenius/pvc-volume-edgefileprocessor" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"odf-lvm-vgedgenius\" not found"
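The journal above is dominated by a handful of messages repeating on timers: the PVC `edgenius/pvc-volume-edgefileprocessor` failing to provision because StorageClass `odf-lvm-vgedgenius` does not exist, informer list/watch failures for the absent `clusterresourcequotas.quota.openshift.io` and `groups.user.openshift.io` APIs, and occasional slow etcd applies. A log like this is easier to act on once collapsed into per-signature counts. The sketch below is illustrative and not part of MicroShift; the `triage` function and the signature labels are invented here, while the regexes match the literal messages seen in this log.

```python
import re
from collections import Counter

# Known error signatures from this journal. The labels are hypothetical
# names chosen for this sketch; the patterns are taken from the log text.
SIGNATURES = [
    ("storageclass-missing",
     re.compile(r'storageclass\.storage\.k8s\.io \\?"odf-lvm-vgedgenius\\?" not found')),
    ("quota-api-absent",
     re.compile(r'clusterresourcequotas\.quota\.openshift\.io')),
    ("user-group-api-absent",
     re.compile(r'groups\.user\.openshift\.io')),
    ("etcd-slow-apply",
     re.compile(r'apply request took too long')),
]

def triage(lines):
    """Count journal lines per known signature (first match wins)."""
    counts = Counter()
    for line in lines:
        for label, pattern in SIGNATURES:
            if pattern.search(line):
                counts[label] += 1
                break
    return counts

# Two literal entries from the journal above, for demonstration.
sample = [
    'Jan 17 14:53:50 edgenius microshift[2779]: kube-controller-manager '
    'E0117 14:53:50.122201 2779 pv_controller.go:1541] error finding '
    'provisioning plugin for claim edgenius/pvc-volume-edgefileprocessor: '
    'storageclass.storage.k8s.io "odf-lvm-vgedgenius" not found',
    'Jan 17 14:54:10 edgenius microshift[2779]: kube-apiserver '
    'W0117 14:54:10.191294 2779 reflector.go:424] failed to list *v1.Group: '
    'the server could not find the requested resource (get groups.user.openshift.io)',
]

if __name__ == "__main__":
    for label, count in triage(sample).items():
        print(f"{label}: {count}")
```

Run against the full journal (e.g. lines read from `journalctl -u microshift --no-pager`), this kind of reduction makes it clear that the actionable item is the missing StorageClass, while the repeating informer errors each trace back to a single absent API group.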