Bug
Resolution: Unresolved
4.16.z
Description of problem:
During a system test, pods add a data file to a local volume, the file is removed, and we then verify whether the file still exists; we randomly find that the file has not been deleted. In this latest case, executing the test manually, we found the issue on two of the 30 pods.
Version-Release number of selected component (if applicable):
4.16.13
How reproducible:
Many but not 100%
Steps to Reproduce:
1. Deploy 4.16.13 SNO with DU profile
2. Run edge_tests/ecosystem/fec-du/test_ztp_du_local_volumes_content_cleanup.py
FAILED edge_tests/ecosystem/fec-du/test_ztp_du_local_volumes_content_cleanup.py::test_ztp_du_local_volumes_content_cleanup - AssertionError: Pods with unclean PVCs found: ['hello-world-10', 'hello-world-4']
To reproduce manually, go into the directory and run:
cd /var/lib/jenkins/workspace/ocp-edge-auto-tests/ocp-edge-auto
source ocp-edge-venv/bin/activate
pip install -r requirements.txt
pytest edge_tests/ecosystem/fec-du/test_ztp_du_local_volumes_content_cleanup.py
Actual results:
After the deletion loop over all 30 pods, the test file should have been erased from every pod, but randomly this does not happen. In this latest case, two of the pods had the issue.
Expected results:
The content under /data is expected to be erased in every pod.
Additional info:
Test case code:
https://gitlab.cee.redhat.com/ocp-edge-qe/ocp-edge-auto/-/blob/master/edge_tests/ecosystem/fec-du/test_ztp_du_local_volumes_content_cleanup.py?ref_type=heads
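The leftover files seen in the unclean pods below are UTC-timestamped .txt files. Each pod gets a marker file written into its /data PVC mount roughly along these lines (a sketch only, with assumed commands; the exact mechanism is in the test linked above):

pod=hello-world-0
# File name matches the timestamped .txt files seen in the failing pods (assumed format).
f="$(date -u +%Y-%m-%dT%H:%M:%S%z).txt"
# Create the marker file on the pod's local-volume mount, then confirm it is there.
oc rsh -n ztp-testns "$pod" touch "/data/$f"
oc rsh -n ztp-testns "$pod" ls /data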
oc get pods -n ztp-testns
NAME READY STATUS RESTARTS AGE
hello-world-0 1/1 Running 0 21m
hello-world-1 1/1 Running 0 21m
hello-world-10 1/1 Running 0 21m
hello-world-11 1/1 Running 0 21m
hello-world-12 1/1 Running 0 21m
hello-world-13 1/1 Running 0 21m
hello-world-14 1/1 Running 0 21m
hello-world-15 1/1 Running 0 21m
hello-world-16 1/1 Running 0 21m
hello-world-17 1/1 Running 0 21m
hello-world-18 1/1 Running 0 21m
hello-world-19 1/1 Running 0 21m
hello-world-2 1/1 Running 0 21m
hello-world-20 1/1 Running 0 21m
hello-world-21 1/1 Running 0 21m
hello-world-22 1/1 Running 0 21m
hello-world-23 1/1 Running 0 21m
hello-world-24 1/1 Running 0 21m
hello-world-25 1/1 Running 0 21m
hello-world-26 1/1 Running 0 21m
hello-world-27 1/1 Running 0 21m
hello-world-28 1/1 Running 0 21m
hello-world-29 1/1 Running 0 21m
hello-world-3 1/1 Running 0 21m
hello-world-4 1/1 Running 0 21m
hello-world-5 1/1 Running 0 21m
hello-world-6 1/1 Running 0 21m
hello-world-7 1/1 Running 0 21m
hello-world-8 1/1 Running 0 21m
hello-world-9 1/1 Running 0 21m
In hello-world-4 and hello-world-10 you can see the data is still present, while in the other pods shown it is not present (as it should be), hence the script reports a failure.
oc rsh -n ztp-testns hello-world-0 ls /data
[kni@registry.kni-qe-31 ~]$ oc rsh -n ztp-testns hello-world-1 ls /data
[kni@registry.kni-qe-31 ~]$ oc rsh -n ztp-testns hello-world-4 ls /data
2024-09-20T15:36:15+0000.txt
[kni@registry.kni-qe-31 ~]$ oc rsh -n ztp-testns hello-world-10 ls /data
2024-09-20T15:39:33+0000.txt
[kni@registry.kni-qe-31 ~]$ oc rsh -n ztp-testns hello-world-11 ls /data
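To inspect all 30 pods in one pass instead of rsh-ing into each of them individually, a loop like the following can be used (namespace and pod names as listed above); it prints the same "Pods with unclean PVCs found" summary the test asserts on:

unclean=""
for i in $(seq 0 29); do
    pod="hello-world-$i"
    # Any file left under /data means the local volume content was not cleaned up.
    files="$(oc rsh -n ztp-testns "$pod" ls /data 2>/dev/null)"
    [ -n "$files" ] && unclean="$unclean $pod"
done
echo "Pods with unclean PVCs found:${unclean:- none}"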
oc describe pods -n ztp-testns hello-world-4
Name: hello-world-4
Namespace: ztp-testns
Priority: 0
Service Account: default
Node: sno-4.kni-qe-32.lab.eng.rdu2.redhat.com/10.1.101.17
Start Time: Fri, 20 Sep 2024 15:40:36 +0000
Labels: <none>
Annotations: k8s.ovn.org/pod-networks:
{"default":{"ip_addresses":["10.128.0.84/23","fd01:0:0:1::176e/64"],"mac_address":"0a:58:0a:80:00:54","gateway_ips":["10.128.0.1","fd01:0:...
k8s.v1.cni.cncf.io/network-status:
[{
"name": "ovn-kubernetes",
"interface": "eth0",
"ips": [
"10.128.0.84",
"fd01:0:0:1::176e"
],
"mac": "0a:58:0a:80:00:54",
"default": true,
"dns": {}
},{
"name": "ztp-testns/ztp-sriov-nw-du-mh",
"interface": "net1",
"mac": "86:b3:31:48:94:e8",
"mtu": 1500,
"dns": {},
"device-info": {
"type": "pci",
"version": "1.1.0",
"pci":
}
}]
k8s.v1.cni.cncf.io/networks: [
]
openshift.io/scc: restricted-v2
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Running
SeccompProfile: RuntimeDefault
IP: 10.128.0.84
IPs:
IP: 10.128.0.84
IP: fd01:0:0:1::176e
Containers:
hello-world:
Container ID: cri-o://f3b0856398f61d5250390a6b3c8bea39c2c0966846afe6c98c5a8761a2aeb28b
Image: registry.kni-qe-31.lab.eng.rdu2.redhat.com:5000/rhscl/httpd-24-rhel7:latest
Image ID: registry.kni-qe-31.lab.eng.rdu2.redhat.com:5000/rhscl/httpd-24-rhel7@sha256:39d6e32c87f3cbd253cf1e91f99f0eec8984a9c1b5c49bd5e4419eecfab82d1a
Port: 8080/TCP
Host Port: 0/TCP
SeccompProfile: RuntimeDefault
State: Running
Started: Fri, 20 Sep 2024 15:40:40 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 600m
memory: 100M
openshift.io/du_mh: 1
Requests:
cpu: 600m
memory: 100M
openshift.io/du_mh: 1
Environment:
service_name: hello-world
Mounts:
/data from local-disk (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4frd8 (ro)
/var/www/html/ from html-index-file (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
html-index-file:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: index-html-configmap
Optional: false
local-disk:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hello-world-pvc-4
ReadOnly: false
kube-api-access-4frd8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 34m default-scheduler Successfully assigned ztp-testns/hello-world-4 to sno-4.kni-qe-32.lab.eng.rdu2.redhat.com
Normal AddedInterface 34m multus Add eth0 [10.128.0.84/23 fd01:0:0:1::176e/64] from ovn-kubernetes
Normal AddedInterface 34m multus Add net1 [] from ztp-testns/ztp-sriov-nw-du-mh
Normal Pulled 34m kubelet Container image "registry.kni-qe-31.lab.eng.rdu2.redhat.com:5000/rhscl/httpd-24-rhel7:latest" already present on machine
Normal Created 34m kubelet Created container hello-world
Normal Started 34m kubelet Started container hello-world