Bug
Resolution: Duplicate
Blocker
Quality / Stability / Reliability
NEW
Description of problem:
We frequently see specific types of failures occur on the cnv-4.12-network-ovn lane:
https://main-jenkins-csb-cnvqe.apps.ocp-c1.prod.psi.redhat.com/job/test-kubevirt-cnv-4.12-network-ovn-ocs/
These usually present themselves as a test failure with the following error:
Unexpected Warning event received: testvmi-pnsj4,77452e3c-155a-44dd-bfa0-43ffacfe9bb5: failed to detect root mount point of containerDisk disk0 on the node: no mount containing / found in the mount namespace of pid 1 Expected <string>: Warning not to equal <string>: Warning
These failures are exclusive to the 4.12 network lane.
This bug is about investigating the root cause for these.
Current findings:
/proc/1/mountinfo on the node doesn't contain the containerdisk container mount (the mount may no longer exist?)
[test_id:676] seems to trigger it often (local cluster-sync/functest against external cluster)
This happens from 4.12.6 onwards (4.12.5 does not produce these errors)
Diff between 4.12.5 and 4.12.6
It seems that podman is responsible for the unmount.
The lane passed successfully when podman was uninstalled from the node.
We suspect that the network tests create bridges on the node; this invokes `/usr/local/bin/resolv-prepender.sh`, which creates a podman container.
Since the node uses CRI-O for Kubernetes, there appears to be a collision between podman and CRI-O over shared container storage.
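For context on the error text above ("no mount containing / found in the mount namespace of pid 1"): the detection boils down to scanning the node's /proc/1/mountinfo for the mount point that contains the containerDisk image path. Below is a minimal, hypothetical sketch of that lookup, not KubeVirt's actual implementation; the sample mountinfo lines and paths are illustrative. If podman unmounts the overlay entry, only "/" (or nothing, in a stripped-down namespace) remains to match, which is consistent with the failure observed.

```python
def find_mount_containing(mountinfo_text, path):
    """Return the longest mount point in a mountinfo table that is a
    prefix of `path`, or None if no mount contains it.

    Per proc(5), the mount point is the 5th whitespace-separated field
    of each /proc/[pid]/mountinfo line.
    """
    best = None
    for line in mountinfo_text.splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue
        mount_point = fields[4]
        # `path` is inside this mount if it equals the mount point or
        # lives underneath it; prefer the longest (most specific) match.
        if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
            if best is None or len(mount_point) > len(best):
                best = mount_point
    return best


# Illustrative mountinfo excerpt (paths are made up for the example).
sample = """\
22 1 0:21 / / rw,relatime shared:1 - xfs /dev/vda4 rw
45 22 0:40 / /var/lib/containers/storage/overlay rw shared:2 - overlay overlay rw
"""

print(find_mount_containing(sample, "/var/lib/containers/storage/overlay/abc/disk.img"))
# -> /var/lib/containers/storage/overlay
```

If the overlay line disappears from pid 1's mount namespace (e.g. after a podman unmount), the same lookup falls back to "/" or returns nothing, which would surface as the warning in the test failure.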
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
blocks:
CNV-29847 Investigate root cause of failing containerdisk tests in the cnv-4.12-network-ovn lane (Closed)