Type: Bug
Priority: Normal
Severity: Moderate
Status: CLOSED
Resolution: Done
Affects Version: 6.11.5
Description of problem:
During large (>500GB) content exports, there is a massive increase in the time taken for the export, caused by a very long-running postgres query that executes before any export files are written.
This BZ was raised in relation to support case #03609607.
During a 512GB export, 21 hours were spent in the following processes before any export files were written:
100% CPU util for: "postgres: parallel worker for PID 12890"
PID 12890: "postgres: pulp pulpcore ::1946170) SELECT"
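For reference, a minimal sketch of how the long-running statement can be inspected while the export is stalled (assuming root/sudo access on the Satellite server and the bundled PostgreSQL instance; the pulpcore database name is taken from the process title above):

# List the longest-running active queries, including parallel workers
sudo -u postgres psql pulpcore -c "SELECT pid, now() - query_start AS runtime, state, left(query, 120) AS query FROM pg_stat_activity WHERE state = 'active' ORDER BY runtime DESC;"

The full query text captured this way could then be fed to EXPLAIN (ANALYZE, BUFFERS) to see where the time is being spent.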
Version-Release number of selected component (if applicable):
6.11.5.4
How reproducible:
Every time
Steps to Reproduce:
Testing has been done on a 512GB export that contains:
Repositories: 27 (both Red Hat and EPEL)
RPM Packages: ~166,000
Size of export: 512GB
Command run: time hammer content-export complete version --content-view testexport --version 2 --organization-id 1 --chunk-size-gb 200
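While the export runs, the stall before any files are written can be observed from a second terminal (a sketch, assuming the default Satellite export location /var/lib/pulp/exports):

# The export directory stays empty until the long-running SELECT completes
watch -n 60 'du -sh /var/lib/pulp/exports'
# Meanwhile the postgres backend and its parallel worker sit at ~100% CPU
top -u postgres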
Actual results:
Time to create: ~25.5hrs
Expected results:
Ideally, less than 6 hours to create the export.
Additional info:
Please see support case #03609607, which has much more contextual information.
The customer requires the ability to produce 2.1TB+ exports in a timely manner.
Testing was also done on a 2.1TB export that contains:
Repositories: 151 (both Red Hat and 3rd party repos)
RPM Packages: >200,000
Size of export: 2.1TB
Time to create: ~37hrs
For comparison, a much smaller export of ~300GB completes in a fraction of the time:
Repositories: 9 (both Red Hat and EPEL)
RPM Packages: ~130,000
Size of export: 306GB
Time to create: 3hrs 25mins
Export time grows non-linearly with export size.
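A rough throughput comparison derived from the figures above illustrates the drop-off:

306GB / ~3.4hrs  = ~90GB/hr
512GB / ~25.5hrs = ~20GB/hr
2.1TB / ~37hrs   = ~57GB/hr

Even the faster of the two large exports sustains well under the ~90GB/hr achieved at 306GB.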
Duplicated by: SAT-20653 - Incremental export using fs-export generates very slow queries and takes huge time to export (Closed)