Type: Bug
Resolution: Done-Errata
Status: CLOSED
Sprint: Storage Core Sprint 230, Storage Core Sprint 231
+++ This bug was initially created as a clone of Bug #2158608 +++
Description of problem: Currently, the pods associated with a dataimportcron cronjob are not cleaned up after the subsequent job completes successfully.
Version-Release number of selected component (if applicable):
4.11.2-30
How reproducible:
100%
Steps to Reproduce:
1. NA
Actual results:
It is specifically problematic that failed pods remain in the cluster even when a subsequent job appears to have completed successfully:
===============
hyperconverged-cluster-cli-download-599866857f-8c4dh 1/1 Running 0 19h
initial-job-centos-7-image-cron-e9554b3f-txq86 0/1 Completed 0 19h
initial-job-centos-7-image-cron-e9554b3f-x482d 0/1 Error 0 19h
initial-job-centos-stream8-image-cron-2971e66f-26pm8 0/1 Completed 0 19h
initial-job-centos-stream9-image-cron-e38db61b-5744p 0/1 Completed 0 19h
initial-job-fedora-image-cron-6ef84834-vldvd 0/1 Completed 0 19h
==============
This impacts the full regression run, since our tests are designed to catch any pods left in a failed state.
Expected results:
If a job eventually succeeds, the associated pods should be cleaned up.
Additional info:
— Additional comment from Arnon Gilboa on 2023-01-08 16:43:56 UTC —
Debarati, is it OK if we keep the jobs/pods for some TTLSecondsAfterFinished (both Complete and Failed)? Say 10 seconds?
— Additional comment from Debarati Basu-Nag on 2023-01-09 15:30:25 UTC —
@agilboa@redhat.com yes, it would work for us if these pods/jobs are kept for a low TTLSecondsAfterFinished such as 10 seconds.
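For illustration, below is a minimal Go sketch (not CDI's actual controller code) of how a batch/v1 Job created for such an import could set TTLSecondsAfterFinished to the proposed 10 seconds, so the Kubernetes TTL-after-finished controller garbage-collects the Job and its pods (both Complete and Failed) shortly after the Job finishes. The job name follows the pod listing above; the namespace, container image, and command are placeholders.
===============
// Sketch only: builds a Job whose finished pods are cleaned up after 10s.
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func newImportJob(name, namespace string) *batchv1.Job {
	ttl := int32(10)     // garbage-collect the Job and its pods 10s after it finishes
	backoff := int32(2)  // allow a couple of retries before the Job is marked Failed
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
		},
		Spec: batchv1.JobSpec{
			TTLSecondsAfterFinished: &ttl,
			BackoffLimit:            &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "initial-import",
						Image:   "quay.io/example/importer:latest", // placeholder image
						Command: []string{"/usr/bin/importer"},     // placeholder command
					}},
				},
			},
		},
	}
}

func main() {
	job := newImportJob("initial-job-centos-7-image-cron", "openshift-cnv")
	fmt.Printf("job %s/%s TTLSecondsAfterFinished=%d\n",
		job.Namespace, job.Name, *job.Spec.TTLSecondsAfterFinished)
}
==============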
Is blocked by:
CNV-23977 [2158608] [4.11] Failed/successful pods associated with dataimportcron jobs needs to be cleaned up - Closed