The Tekton resource pruner on a cluster with 80 namespaces creates 80 containers in a single pod, where each container runs a tkn command. A few of the containers fail with `Error: Get "https://172.30.0.1:443/api?timeout=32s": dial tcp 172.30.0.1:443: i/o timeout`.
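For illustration, the pod created by the pruner CronJob has roughly the following shape on this cluster, with one container per namespace. This is a minimal sketch: the container names, image, and exact tkn arguments are assumptions for readability, not values taken from the cluster.

apiVersion: v1
kind: Pod
metadata:
  generateName: tekton-resource-pruner-97kx9-27488710-
  namespace: openshift-pipelines
spec:
  restartPolicy: Never
  containers:
  # one container per target namespace, 80 in total on this cluster;
  # each container runs a single tkn prune command against its namespace
  - name: prune-mkovarik                     # illustrative name
    image: registry.example.com/tkn:latest   # illustrative image
    command: ["tkn", "pipelinerun", "delete", "--keep=10", "-f", "-n", "mkovarik"]
  - name: prune-default
    image: registry.example.com/tkn:latest
    command: ["tkn", "pipelinerun", "delete", "--keep=10", "-f", "-n", "default"]
  # ...and so on for the remaining namespaces

All of these containers start in parallel, and each opens its own connection to the API server at 172.30.0.1:443, which is the endpoint the failing containers time out against.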
# oc get TektonConfig config -o jsonpath='{.spec.pruner}'
{"keep":10,"resources":["pipelinerun"],"schedule":"0/10 * * * *"}
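The same pruner settings, expressed as the corresponding TektonConfig spec fragment (the apiVersion shown is the one used by the Tekton operator; adjust if it differs on the cluster):

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    keep: 10
    resources:
    - pipelinerun
    schedule: "0/10 * * * *"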
# oc logs --all-containers=true -f -n openshift-pipelines --max-log-requests=100 tekton-resource-pruner-97kx9-27488710-zxhfv
All but 10 PipelineRuns(Completed) deleted in namespace "mkovarik"
All but 10 PipelineRuns(Completed) deleted in namespace "default"
Error: Get "https://172.30.0.1:443/api?timeout=32s": dial tcp 172.30.0.1:443: i/o timeout
All but 10 PipelineRuns(Completed) deleted in namespace "damoreno"
Error: Get "https://172.30.0.1:443/api?timeout=32s": dial tcp 172.30.0.1:443: i/o timeout
...
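To see how many containers the job pod runs and which ones terminated with a non-zero exit code, something along these lines can be used (pod name taken from the log command above):

# oc get pod -n openshift-pipelines tekton-resource-pruner-97kx9-27488710-zxhfv -o jsonpath='{.spec.containers[*].name}' | wc -w
# oc get pod -n openshift-pipelines tekton-resource-pruner-97kx9-27488710-zxhfv -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state.terminated.exitCode}{"\n"}{end}'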
Pipeline version: v0.28.3
Issue links:
- clones SRVKP-2160: tekton-resource-pruner job failing when having many containers (Closed)
- is documented by RHDEVDOCS-4168: Document solution for tekton-resource-pruner job failing when having many containers (Open)