-
Bug
-
Resolution: Unresolved
-
Normal
-
None
-
6.19.0
-
False
-
Artemis Refinement Backlog
-
sat-artemis
-
None
-
None
-
None
-
None
When syncing many small/large repos at once, some repos do not sync properly, and almost every time the failure happens on the distribution update or refresh step at the end.
Reproducible:
Yes
Affected components:
satellite-6.19.0-1.el9sat.noarch
rubygem-katello-4.20.0-0.3.rc1.el9sat.noarch
python3.12-pulpcore-3.85.9-1.el9pc.noarch
python3.12-pulp-rpm-3.32.6-1.el9pc.noarch
python3.12-pulp-container-2.26.6-1.el9pc.noarch
rubygem-pulp_rpm_client-3.32.2-2.el9sat.noarch
rubygem-pulp_container_client-2.26.2-2.el9sat.noarch
Steps to reproduce:
- Install Snap 2.0 of Satellite 6.19.0 on a VM with 24 GB RAM and 6 vCPUs
- Import a valid subscription manifest
- Set the default Red Hat and custom repository download policy to On Demand
- Enable the following repos in Satellite:
Red Hat Enterprise Linux 10 for x86_64 - AppStream Kickstart 10.0
Red Hat Enterprise Linux 10 for x86_64 - AppStream RPMs 10
Red Hat Enterprise Linux 10 for x86_64 - AppStream RPMs 10.1
Red Hat Enterprise Linux 10 for x86_64 - BaseOS Kickstart 10.0
Red Hat Enterprise Linux 10 for x86_64 - BaseOS RPMs 10
Red Hat Enterprise Linux 10 for x86_64 - BaseOS RPMs 10.1
Red Hat Enterprise Linux 7 Server - Extended Life Cycle Support RPMs x86_64
Red Hat Enterprise Linux 7 Server - Extras RPMs x86_64
Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart 8.10
Red Hat Enterprise Linux 8 for x86_64 - AppStream RPMs 8
Red Hat Enterprise Linux 8 for x86_64 - AppStream RPMs 8.10
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart 8.10
Red Hat Enterprise Linux 8 for x86_64 - BaseOS RPMs 8
Red Hat Enterprise Linux 8 for x86_64 - BaseOS RPMs 8.10
Red Hat Enterprise Linux 9 for x86_64 - AppStream Kickstart 9.6
Red Hat Enterprise Linux 9 for x86_64 - AppStream RPMs 9
Red Hat Enterprise Linux 9 for x86_64 - AppStream RPMs 9.6
Red Hat Enterprise Linux 9 for x86_64 - AppStream RPMs 9.7
Red Hat Enterprise Linux 9 for x86_64 - BaseOS Kickstart 9.6
Red Hat Enterprise Linux 9 for x86_64 - BaseOS RPMs 9
Red Hat Enterprise Linux 9 for x86_64 - BaseOS RPMs 9.6
Red Hat Enterprise Linux 9 for x86_64 - BaseOS RPMs 9.7
Red Hat Satellite Client 6 for RHEL 10 x86_64 RPMs
Red Hat Satellite Client 6 for RHEL 7 Server - ELS RPMs x86_64
Red Hat Satellite Client 6 for RHEL 8 x86_64 RPMs
Red Hat Satellite Client 6 for RHEL 9 x86_64 RPMs
rhel10/toolbox
rhel9/toolbox
ubi10/httpd-24
ubi10-micro
ubi10-minimal
ubi10/nginx-126
ubi8/httpd-24
ubi8-micro
ubi8-minimal
ubi8/nginx-118
ubi9/httpd-24
ubi9-micro
ubi9-minimal
ubi9/nginx-126
ubi9/ubi-minimal
ubi9/ubi-stig
- Make sure that each docker-type repo has the `latest` tag specified in the Include Tags list.
- Sync 10 repos at once, then move on to the next 10 until all repos are synced, and observe the behavior of Satellite.
- After a few days, select all repos on the Content --> Sync Status page and trigger a sync for all of them at once.
Actual Behavior:
On either the last or the second-to-last step, whenever a bulk/concurrent sync is running and some repos have new content to sync, at least one or two of the sync tasks end up incomplete with the following error:
Mar 4 21:48:30 satellite pulpcore-worker-4[175991]: pulp [643be620-c8ad-4b22-bcf0-06265f44cebf]: pulpcore.tasking.tasks:INFO: Starting task id: 019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69 in domain: default, task_type: pulpcore.app.tasks.base.ageneral_update, immediate: True, deferred: True
Mar 4 21:48:35 satellite pulpcore-worker-4[175991]: pulp [643be620-c8ad-4b22-bcf0-06265f44cebf]: pulpcore.tasking.tasks:INFO: Immediate task 019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69 timed out after 5 seconds.
Mar 4 21:48:37 satellite pulpcore-worker-4[175991]: pulp [643be620-c8ad-4b22-bcf0-06265f44cebf]: pulpcore.tasking.tasks:INFO: Task[pulpcore.app.tasks.base.ageneral_update] 019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69 failed (RuntimeError: Immediate task timed out after 5 seconds.) in domain: default
Mar 4 21:48:37 satellite pulpcore-worker-4[175991]: pulp [643be620-c8ad-4b22-bcf0-06265f44cebf]: pulpcore.tasking.tasks:INFO: File "/usr/lib/python3.12/site-packages/pulpcore/tasking/tasks.py", line 103, in _execute_task
Mar 4 21:48:37 satellite pulpcore-worker-4[175991]: raise RuntimeError(
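For context, the failure mode in the log can be illustrated with a minimal sketch of how a hard deadline on an "immediate" task might be enforced. This is an assumption-level illustration of the pattern (using `asyncio.wait_for`), not pulpcore's actual implementation; only the 5-second default and the error message are taken from the log above.

```python
import asyncio

# Hypothetical sketch of an "immediate task" deadline like the one reported
# in the log above. NOT pulpcore's actual code; the timeout value and error
# message are taken from the log, everything else is illustrative.
IMMEDIATE_TIMEOUT = 5  # seconds, matching the timeout reported in the log

async def run_immediate(coro, timeout=IMMEDIATE_TIMEOUT):
    """Run a coroutine, raising RuntimeError if it exceeds the deadline."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        raise RuntimeError(f"Immediate task timed out after {timeout} seconds.")

async def slow_distribution_update():
    # Stand-in for a distribution update that takes too long under load.
    await asyncio.sleep(10)

# asyncio.run(run_immediate(slow_distribution_update()))
# raises RuntimeError once the deadline passes
```

Under heavy concurrent sync load, any distribution update exceeding the fixed deadline is converted into a task failure rather than being retried or deferred, which matches the observed behavior.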
# pulp task show --uuid 019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69
{
  "pulp_href": "/pulp/api/v3/tasks/019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69/",
  "prn": "prn:core.task:019cb9a2-01cd-76e1-ae6f-bd2c6e60fc69",
  "pulp_created": "2026-03-04T16:15:28.217566Z",
  "pulp_last_updated": "2026-03-04T16:15:28.208616Z",
  "state": "failed",
  "name": "pulpcore.app.tasks.base.ageneral_update",
  "logging_cid": "643be620-c8ad-4b22-bcf0-06265f44cebf",
  "created_by": "/pulp/api/v3/users/1/",
  "unblocked_at": "2026-03-04T16:17:52.735450Z",
  "started_at": "2026-03-04T16:18:30.084494Z",
  "finished_at": "2026-03-04T16:18:35.748848Z",
  "error": {
    "traceback": "  File \"/usr/lib/python3.12/site-packages/pulpcore/tasking/tasks.py\", line 103, in _execute_task\n    raise RuntimeError(\n",
    "description": "Immediate task timed out after 5 seconds."
  },
  "worker": "/pulp/api/v3/workers/019c8f1e-0c51-70d6-bca0-c3a20d0c91c2/",
  "parent_task": null,
  "child_tasks": [],
  "task_group": null,
  "progress_reports": [],
  "created_resources": [],
  "reserved_resources_record": [
    "pdrn:c6063b89-d692-44bd-8515-49f9a14a61ee:distributions",
    "shared:prn:core.domain:c6063b89-d692-44bd-8515-49f9a14a61ee"
  ]
}
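When triaging many repos at once, it helps to reduce each task record to a one-line outcome. The sketch below is a small helper I used for that; the field names (`state`, `name`, `error.description`) follow the task record shown above, but the helper itself is illustrative, not part of any pulp tooling.

```python
import json

# Minimal triage helper for task records like the `pulp task show` output
# above. Field names follow the pulpcore task record shown in this report;
# the helper itself is illustrative, not part of pulp-cli.
def summarize_task(task: dict) -> str:
    """Return a one-line summary of a pulp task's outcome."""
    state = task.get("state")
    if state == "failed" and task.get("error"):
        return f"{task['name']}: failed - {task['error'].get('description')}"
    return f"{task.get('name')}: {state}"

# Reduced copy of the failed task record from this report:
record = json.loads("""
{
  "state": "failed",
  "name": "pulpcore.app.tasks.base.ageneral_update",
  "error": {"description": "Immediate task timed out after 5 seconds."}
}
""")
print(summarize_task(record))
# pulpcore.app.tasks.base.ageneral_update: failed - Immediate task timed out after 5 seconds.
```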
And the Satellite WebUI shows the error at the `Actions::Pulp3::Repository::RefreshDistribution` step.
Expected results:
No such error. Either handle the distribution update/refresh task gracefully, or allow configuring a timeout longer than 5 seconds.
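To make the second option concrete, the sketch below shows one shape the configurability could take: a timeout read from the environment with the current 5-second behavior as the fallback. The variable name `IMMEDIATE_TASK_TIMEOUT` is a hypothetical placeholder, not an existing pulpcore or Satellite setting.

```python
import os

# Hypothetical sketch of the requested fix: read the immediate-task timeout
# from configuration instead of hardcoding 5 seconds. The variable name
# IMMEDIATE_TASK_TIMEOUT is an assumption, not an existing setting.
DEFAULT_IMMEDIATE_TIMEOUT = 5  # current hardcoded behavior

def immediate_task_timeout() -> float:
    """Return the timeout in seconds, falling back to today's default of 5."""
    raw = os.environ.get("IMMEDIATE_TASK_TIMEOUT")
    if raw is None:
        return DEFAULT_IMMEDIATE_TIMEOUT
    try:
        return float(raw)
    except ValueError:
        # Ignore malformed values rather than breaking task execution.
        return DEFAULT_IMMEDIATE_TIMEOUT
```

Keeping the default at 5 seconds would preserve current behavior while letting large deployments raise the limit.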
Additional Notes:
There are customers who have a much larger set of repos synced via daily or weekly sync plans, across several orgs.
They can very easily run into the same issue if the distribution updates take more than 5 seconds to complete at the Pulp level.
I have never seen this happen on Satellite 6.18. There, performance was smoother, and Pulp was largely self-healing the tasks even when some workers timed out.