Satellite / SAT-28151

pulpcore-worker higher memory usage when syncing a custom repo


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Affects Version: 6.15.0
    • Component: Pulp
    • Severity: Moderate

      Description of problem:
      When synchronizing either of two Grafana repos (they are very similar, afaik), memory usage of `pulpcore-worker` is high, up to 6 GB. Moreover, the usage has grown by about 10% since 6.14, so this is a kind of UX regression: previously, a Capsule sync of many CVs containing the Grafana repos still worked fine, while now the higher memory demand triggers the OOM killer.

      The memory consumption, as well as the increase since 6.14, might be legitimate - the Grafana repos have big repodata (a quick way to check its size is sketched below, after the repo URLs). But since the memory usage is so high and there is an evident increase across the 6.14 -> 6.15 upgrade, I would like a more formal confirmation that "it is still OK" (though an improvement in memory consumption is preferred, of course).
       
      The two Grafana repos:

      https://rpm.grafana.com/
      https://packages.grafana.com/oss/rpm
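
      As a rough, hedged check of how big the repodata actually is (this assumes the standard yum layout with repodata/repomd.xml; the loop and variable names are just illustrative), something like the following can be run against either repo:

      # List the repodata files advertised in repomd.xml and print their sizes.
      repo=https://rpm.grafana.com
      for path in $(curl -s "$repo/repodata/repomd.xml" | grep -o 'href="[^"]*"' | cut -d'"' -f2); do
        size=$(curl -sI "$repo/$path" | awk 'tolower($1) == "content-length:" { print $2 }' | tr -d '\r')
        echo "$path: $size bytes"
      done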
      

      How reproducible:
      100%
       

      Is this issue a regression from an earlier version:
      Yes, probably - memory usage when syncing the repo is roughly 10% higher than on 6.14.
       

      Steps to Reproduce:
      1. Run the script below on various Satellite versions:

      # Clean up any product left over from a previous run (this will fail in the 1st run, that's OK).
      hammer product delete --organization-id 1 --name Grafana_03919224_product

      hammer product create --organization-id 1 --name Grafana_03919224_product

      hammer repository create --organization-id 1 --product Grafana_03919224_product --name Grafana_03919224_repo --content-type yum --download-policy on_demand --url https://rpm.grafana.com/

      # Sample pulpcore-worker memory usage once per second in the background.
      while true; do date; ps aux | grep -v grep | grep pulpcore-worker; sleep 1; done > pulpcore-worker.ps.grafana.log &
      pid=$!
      sleep 5

      # Sync the repo while the sampler is running, then stop the sampler.
      repoid=$(hammer repository list | grep Grafana_03919224_repo | awk '{ print $1 }')
      hammer repository sync --organization-id 1 --product Grafana_03919224_product --id $repoid
      sleep 5
      kill $pid

      # Column 6 of ps aux is RSS in KiB; the last line printed is the peak.
      sort -nk6 pulpcore-worker.ps.grafana.log | tail
      

      2. Check the output of the last command - the final line shows the biggest memory consumption (RSS, column 6 of ps, in KiB) reached by a pulp worker over time; a small helper for converting it to GiB is sketched after these steps.

      3. Re-run the script once again. It will delete the product and recreate everything, then sync again. Check memory usage again - it is unclear why, but the second (and any subsequent) sync of the repo consumes considerably more memory, on every Satellite version.
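
      For convenience, the peak value from the log can be converted to GiB with a one-liner like this (a sketch; it only assumes the log produced by the script above, where column 6 of ps aux is RSS in KiB):

      # Print the peak pulpcore-worker RSS from the sampled log, converted from KiB to GiB.
      sort -nk6 pulpcore-worker.ps.grafana.log | tail -n1 \
        | awk '{ printf "peak RSS: %d KiB = %.2f GiB\n", $6, $6 / 1024 / 1024 }'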

      Actual behavior:
      Comparison of peak pulpcore-worker memory usage (RSS from ps, in KiB) on different Satellite versions:

      https://rpm.grafana.com:
      FIRST SYNC:
      6.12: 4970472 = 4.74 GiB
      6.13: 5083524 = 4.85 GiB
      6.14: 4225924 = 4.03 GiB
      6.15: 4675268 = 4.46 GiB
      
      NEXT SYNC:
      6.12: 5434556 = 5.18 GiB
      6.13: 5510888 = 5.26 GiB
      6.14: 5518836 = 5.26 GiB
      6.15: 6006344 = 5.73 GiB
      6.16: 5836332 = 5.57 GiB
      
      https://packages.grafana.com/oss/rpm:
      FIRST SYNC: (not measured)
      
      NEXT SYNC:
      6.12: 5440196 = 5.19 GiB
      6.13: 5428680 = 5.17 GiB
      6.14: 5435956 = 5.18 GiB
      6.15: 6017092 = 5.74 GiB
      6.16: 5862764 = 5.60 GiB
      

      Expected behavior:
      "Lower memory usage". Hard to say what is the reasonable value, but at least the mem.usage should be on the 6.12-6.14 level (see the increase in 6.15).

      Business Impact / Additional info:
      During a Capsule sync, 4 or 8 pulpcore workers each require up to 6 GB of RAM when syncing the Grafana repos that are present in multiple versions of several CVs. That requires more memory than the recommended minimum for a Capsule. A rough way to estimate this worst case is sketched below.
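
      As a hedged illustration of the sizing math (assuming the default pulpcore-worker@<N> systemd unit naming used on Satellite/Capsule, and using the ~6 GiB peak observed above):

      # Rough worst case: number of pulpcore workers times the ~6 GiB peak per worker.
      workers=$(systemctl list-units --type=service --no-legend 'pulpcore-worker@*' | wc -l)
      echo "pulpcore workers: $workers, worst-case sync memory: $((workers * 6)) GiB"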

              Assignee: Unassigned
              Reporter: Pavel Moravec (rhn-support-pmoravec)