- Bug
- Resolution: Unresolved
- Critical
- None
- rhos-18.0.0
- None
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 CrashLoopBackOff 1470 (63s ago) 3d7h
ovn-controller-ovs-ztz7z 1/2 CrashLoopBackOff 1486 (116s ago) 3d7h
[kni@provisioner ~]$
[kni@provisioner ~]$ oc rsh -n openstack ovn-controller-ovs-vdvfj /usr/local/bin/container-scripts/ovsdb_server_liveness.sh
Defaulted container "ovsdb-server" out of: ovsdb-server, ovs-vswitchd, ovsdb-server-init (init)
error: unable to upgrade connection: container not found ("ovsdb-server")
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 CrashLoopBackOff 1470 (3m25s ago) 3d7h
ovn-controller-ovs-ztz7z 1/2 CrashLoopBackOff 1486 (4m18s ago) 3d7h
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 CrashLoopBackOff 1470 (4m5s ago) 3d7h
ovn-controller-ovs-ztz7z 1/2 CrashLoopBackOff 1486 (4m58s ago) 3d7h
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 CrashLoopBackOff 1470 (4m32s ago) 3d7h
ovn-controller-ovs-ztz7z 1/2 Running 1487 (5m25s ago) 3d7h
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 CrashLoopBackOff 1470 (4m36s ago) 3d7h
ovn-controller-ovs-ztz7z 2/2 Running 1487 (5m29s ago) 3d7h
[kni@provisioner ~]$ oc get pods | grep ovn-controller-ovs
ovn-controller-ovs-vdvfj 1/2 Running 1471 (5m15s ago) 3d7h
This appears to be related to the hardcoded liveness probe settings of the ovn-controller-ovs DaemonSet: the liveness check takes too long to complete, so the probe fails and the container is restarted repeatedly. Because these settings are hardcoded, we cannot tweak them to verify whether relaxed values would resolve the issue.
Fresh must-gather reports are available for review.
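For illustration, a liveness probe with relaxed timing could look like the sketch below. This is a hypothetical fragment, not the operator's actual defaults: the container name and liveness script path match the pod output above, but the field values are assumptions, and the DaemonSet would need to expose these settings before they could be tuned.

```yaml
# Hypothetical livenessProbe fragment for the ovsdb-server container of the
# ovn-controller-ovs DaemonSet. Field names are standard Kubernetes probe
# settings; the values shown are illustrative only.
containers:
  - name: ovsdb-server
    livenessProbe:
      exec:
        command:
          - /usr/local/bin/container-scripts/ovsdb_server_liveness.sh
      initialDelaySeconds: 30   # give ovsdb-server time to start before probing
      periodSeconds: 30         # probe less frequently
      timeoutSeconds: 20        # allow the liveness script longer to complete
      failureThreshold: 5       # tolerate several slow runs before restarting
```

If the operator were extended to expose these values, a test run with a larger `timeoutSeconds` and `failureThreshold` would confirm or rule out the probe timing as the cause of the CrashLoopBackOff.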