$ oc project openshift-multus
Now using project "openshift-multus" on server "https://api.build09.ci.devcluster.openshift.com:6443".

$ oc get pods
NAME           READY   STATUS             RESTARTS         AGE
[snip...]
multus-hjcsp   0/1     CrashLoopBackOff   34 (4m12s ago)   3h
multus-hpjkt   1/1     Running            30 (5m8s ago)    151m
multus-hqh54   0/1     CrashLoopBackOff   33 (4m30s ago)   174m
multus-jn865   0/1     CrashLoopBackOff   34 (4m18s ago)   3h
multus-k6hzn   0/1     CrashLoopBackOff   28 (3m51s ago)   144m
multus-khw9c   1/1     Running            29 (5m48s ago)   145m
[snip...]

$ oc describe pod multus-cl5vh
[snip...]
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Message:      2025-02-07T20:39:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d0190426-bb3d-4e8b-a0b4-5bd736815bd2
        2025-02-07T20:39:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d0190426-bb3d-4e8b-a0b4-5bd736815bd2 to /host/opt/cni/bin/
        2025-02-07T20:39:07Z [verbose] multus-daemon started
        2025-02-07T20:39:07Z [verbose] Readiness Indicator file check
        2025-02-07T20:39:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition
[snip...]
Events:
  Type     Reason                            Age                     From                       Message
  ----     ------                            ----                    ----                       -------
  Normal   Scheduled                         179m                    default-scheduler          Successfully assigned openshift-multus/multus-cl5vh to ip-10-0-164-216.us-east-2.compute.internal
  Normal   ArchAwarePredicateSet             179m                    multiarch-tuning-operator  Set the nodeAffinity for the architecture to {amd64, arm64, ppc64le, s390x}
  Normal   ArchAwareSchedGateRemovalSuccess  179m                    multiarch-tuning-operator  Successfully removed the multiarch.openshift.io/scheduling-gate scheduling gate
  Normal   Pulling                           179m                    kubelet                    Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e03ddedbdd84320348f9da5e153b340030f4e7f3a2243a9f2c1e118a1de54b8c"
  Normal   Pulled                            179m                    kubelet                    Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e03ddedbdd84320348f9da5e153b340030f4e7f3a2243a9f2c1e118a1de54b8c" in 11.493s (11.493s including waiting). Image size: 1232841405 bytes.
  Normal   Created                           175m (x5 over 179m)     kubelet                    Created container kube-multus
  Normal   Started                           175m (x5 over 179m)     kubelet                    Started container kube-multus
  Normal   Pulled                            104m (x17 over 178m)    kubelet                    Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e03ddedbdd84320348f9da5e153b340030f4e7f3a2243a9f2c1e118a1de54b8c" already present on machine
  Warning  BackOff                           4m24s (x684 over 178m)  kubelet                    Back-off restarting failed container kube-multus in pod multus-cl5vh_openshift-multus(34ff8e60-943a-48a5-a6a1-537491190e16)

$ oc logs multus-cl5vh
2025-02-07T20:39:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d0190426-bb3d-4e8b-a0b4-5bd736815bd2
2025-02-07T20:39:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d0190426-bb3d-4e8b-a0b4-5bd736815bd2 to /host/opt/cni/bin/
2025-02-07T20:39:07Z [verbose] multus-daemon started
2025-02-07T20:39:07Z [verbose] Readiness Indicator file check
2025-02-07T20:39:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition
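The kube-multus container is failing its Readiness Indicator file check: it times out waiting for the default network's CNI configuration to show up. Inside the pod that path is /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which corresponds to /run/multus/cni/net.d/ on the node itself. One quick sanity check (a sketch only, using the node name from the Events output above; the exact listing will vary) is to look for the file directly on an affected node:

# Hypothetical check: on a healthy node this directory should contain 10-ovn-kubernetes.conf
$ oc debug node/ip-10-0-164-216.us-east-2.compute.internal -- chroot /host ls -l /run/multus/cni/net.d/

Since that file is written by OVN-Kubernetes, the next stop is the openshift-ovn-kubernetes namespace.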
$ oc get namespaces | grep -i ovn
openshift-ovn-kubernetes   Active   500d

$ oc project openshift-ovn-kubernetes
Now using project "openshift-ovn-kubernetes" on server "https://api.build09.ci.devcluster.openshift.com:6443".

$ oc get pods
NAME                 READY   STATUS             RESTARTS         AGE
[snip...]
ovnkube-node-dzl6z   7/8     CrashLoopBackOff   37 (11s ago)     166m
ovnkube-node-fdhg2   7/8     CrashLoopBackOff   41 (81s ago)     3h2m
ovnkube-node-ff5cs   7/8     CrashLoopBackOff   39 (23s ago)     170m
ovnkube-node-ffhhv   7/8     CrashLoopBackOff   37 (25s ago)     161m
ovnkube-node-fhzvk   7/8     CrashLoopBackOff   36 (4m11s ago)   159m
ovnkube-node-fl2rh   7/8     CrashLoopBackOff   36 (117s ago)    157m
ovnkube-node-fldr4   7/8     CrashLoopBackOff   40 (5m6s ago)    3h1m
ovnkube-node-g79gb   7/8     CrashLoopBackOff   35 (2m41s ago)   152m
ovnkube-node-gvx7p   7/8     CrashLoopBackOff   32 (5m1s ago)    145m
ovnkube-node-hxkp6   7/8     CrashLoopBackOff   40 (5m ago)      3h1m
ovnkube-node-jjkkf   8/8     Running            9 (9d ago)       9d
ovnkube-node-jnp6j   7/8     CrashLoopBackOff   38 (2m2s ago)    167m
ovnkube-node-jq6wt   7/8     CrashLoopBackOff   39 (4m3s ago)    175m
ovnkube-node-l5dp8   8/8     Running            1 (17h ago)      17h
ovnkube-node-l7qw2   7/8     CrashLoopBackOff   34 (2m33s ago)   147m
ovnkube-node-lfmf5   7/8     CrashLoopBackOff   39 (64s ago)     176m
ovnkube-node-lg5k9   7/8     CrashLoopBackOff   33 (17s ago)     145m
ovnkube-node-m62ph   7/8     CrashLoopBackOff   39 (5m8s ago)    3h1m
ovnkube-node-mpc6x   7/8     CrashLoopBackOff   32 (2m23s ago)   137m
ovnkube-node-mrt54   7/8     CrashLoopBackOff   35 (2m44s ago)   158m
ovnkube-node-mw7xl   8/8     Running            9 (9d ago)       9d
ovnkube-node-p2fg2   7/8     CrashLoopBackOff   41 (16s ago)     3h1m
ovnkube-node-p8ldn   8/8     Running            9 (9d ago)       9d
ovnkube-node-pqcvx   7/8     CrashLoopBackOff   31 (2m23s ago)   137m
ovnkube-node-prmts   7/8     CrashLoopBackOff   38 (3m21s ago)   175m
ovnkube-node-q76dz   7/8     CrashLoopBackOff   36 (4m26s ago)   159m
ovnkube-node-q78ct   7/8     CrashLoopBackOff   39 (4m14s ago)   3h
ovnkube-node-qgm7t   8/8     Running            9 (9d ago)       9d
ovnkube-node-qrx8d   7/8     CrashLoopBackOff   44 (3m18s ago)   3h24m
ovnkube-node-qxhct   7/8     CrashLoopBackOff   35 (2m21s ago)   157m
ovnkube-node-r4b4v   7/8     CrashLoopBackOff   38 (2m33s ago)   167m
ovnkube-node-rhcqj   7/8     CrashLoopBackOff   36 (3m28s ago)   157m
[snip...]
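Most ovnkube-node pods are stuck at 7/8 ready, so the next question is which of the eight containers is the one crashing. One way to answer that (a sketch; the jsonpath expression is only an illustration) is to dump the container statuses of an affected pod:

# Hypothetical check: print name, restart count and readiness for each container in the pod
$ oc get pod ovnkube-node-r4b4v -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\t"}{.ready}{"\n"}{end}'

The container that keeps restarting and reports ready=false is ovnkube-controller, which is the one whose logs are pulled with -c below.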
$ oc logs ovnkube-node-r4b4v
[snip...]

$ oc logs ovnkube-node-r4b4v -c ovnkube-controller
+ . /ovnkube-lib/ovnkube-lib.sh
[snip...]
I0207 20:42:36.350777 32312 ovs.go:159] Exec(9): /usr/bin/ovs-vsctl --timeout=15 --no-heading --data=bare --format=csv --columns name list interface
I0207 20:42:36.353126 32312 ovs.go:162] Exec(8): stdout: ""
I0207 20:42:36.353137 32312 ovs.go:163] Exec(8): stderr: ""
I0207 20:42:36.353143 32312 node_controller_manager.go:335] CheckForStaleOVSInternalPorts took 4.794292ms
I0207 20:42:36.353151 32312 ovs.go:159] Exec(10): /usr/bin/ovs-vsctl --timeout=15 --columns=name,external_ids --data=bare --no-headings --format=csv find Interface external_ids:sandbox!="" external_ids:vf-netdev-name!=""
I0207 20:42:36.355011 32312 ovs.go:162] Exec(9): stdout: "ens5\nbr-int\nbr-ex\n"
I0207 20:42:36.355028 32312 ovs.go:163] Exec(9): stderr: ""
I0207 20:42:36.357263 32312 ovs.go:162] Exec(10): stdout: ""
I0207 20:42:36.357275 32312 ovs.go:163] Exec(10): stderr: ""
I0207 20:42:36.360204 32312 ovs.go:159] Exec(11): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only get SB_Global . options:name
I0207 20:42:36.364000 32312 ovs.go:162] Exec(11): stdout: "ip-10-0-200-184.us-east-2.compute.internal\n"
I0207 20:42:36.364013 32312 ovs.go:163] Exec(11): stderr: ""
I0207 20:42:36.364034 32312 ovs.go:159] Exec(12): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=10.0.200.184 external_ids:ovn-remote-probe-interval=180000 external_ids:ovn-openflow-probe-interval=180 other_config:bundle-idle-timeout=180 external_ids:ovn-is-interconn=true external_ids:ovn-monitor-all=true external_ids:ovn-ofctrl-wait-before-clear=0 external_ids:ovn-enable-lflow-cache=true external_ids:ovn-set-local-ip="true" external_ids:ovn-memlimit-lflow-cache-kb=1048576 external_ids:hostname="ip-10-0-200-184.us-east-2.compute.internal"
I0207 20:42:36.365173 32312 controller_manager.go:368] Waiting up to 5m0s for a node to have "ip-10-0-200-184.us-east-2.compute.internal" zone
I0207 20:42:36.368231 32312 ovs.go:162] Exec(12): stdout: ""
I0207 20:42:36.368243 32312 ovs.go:163] Exec(12): stderr: ""
I0207 20:42:36.368250 32312 ovs.go:159] Exec(13): /usr/bin/ovs-vsctl --timeout=15 -- clear bridge br-int netflow -- clear bridge br-int sflow -- clear bridge br-int ipfix
I0207 20:42:36.372241 32312 ovs.go:162] Exec(13): stdout: ""
I0207 20:42:36.372252 32312 ovs.go:163] Exec(13): stderr: ""
I0207 20:42:36.372263 32312 udn_isolation.go:90] Starting UDN host isolation manager
E0207 20:42:36.372606 32312 node_controller_manager.go:162] Stopping node network controller manager, err=failed to start default node network controller: failed to find kubelet cgroup path: %!w()
I0207 20:42:36.372628 32312 nad_controller.go:166] [node-nad-controller NAD controller]: shutting down
I0207 20:42:36.372661 32312 ovnkube.go:595] Stopping ovnkube...
I0207 20:42:36.372662 32312 metrics.go:553] Stopping metrics server at address "127.0.0.1:29105"
I0207 20:42:36.373754 32312 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.373792 32312 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.374538 32312 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.374819 32312 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.376769 32312 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.376874 32312 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
I0207 20:42:36.377029 32312 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.377083 32312 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.377393 32312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.377405 32312 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.377449 32312 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.377564 32312 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160
I0207 20:42:36.377589 32312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.377642 32312 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
I0207 20:42:36.377787 32312 factory.go:652] Stopping watch factory
I0207 20:42:36.377831 32312 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141
I0207 20:42:36.377944 32312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141
I0207 20:42:36.378094 32312 ovnkube.go:599] Stopped ovnkube
I0207 20:42:36.378126 32312 metrics.go:553] Stopping metrics server at address "127.0.0.1:29103"
F0207 20:42:36.378169 32312 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller - while waiting for any node to have zone: "ip-10-0-200-184.us-east-2.compute.internal", error: context canceled, failed to start node network controller: failed to start default node network controller: failed to find kubelet cgroup path: %!w()]
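The fatal message bundles two failures, but the "waiting for any node to have zone" one ends in "context canceled", i.e. it was aborted during shutdown, so the error that appears to start the teardown is the earlier E0207 line: "failed to find kubelet cgroup path" (the trailing %!w() is a Go formatting artifact rather than extra detail). A reasonable first check on an affected node is to see what cgroup the kubelet is actually running under; a sketch only, using the node name reported in the log above and the usual oc debug pattern:

# Hypothetical check: the CGroup: line in the output shows where kubelet sits in the cgroup hierarchy
$ oc debug node/ip-10-0-200-184.us-east-2.compute.internal -- chroot /host systemctl status kubelet --no-pager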