Test VM: 16 vCPU, 32 GiB RAM, RHEL 9.6
[root@rhel ~]# podman version
Client: Podman Engine
Version: 5.4.0
API Version: 5.4.0
Go Version: go1.23.9 (Red Hat 1.23.9-1.el9_6)
Built: Wed Jun 4 12:43:04 2025
OS/Arch: linux/amd64
ImageSet configuration used for mirroring:
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
  additionalImages:
  - name: quay.io/modh/rocm-notebooks@sha256:b7960fd8d47794ec6b058924563277ecdc1662b5cb4c84489a5f7429a40b576d
  - name: quay.io/modh/rocm-notebooks@sha256:d6ba168ec7d4bb59cef15a5b17855386da3bff8dadf9b156a1ae3bd1b670a3e7
  - name: quay.io/modh/runtime-images@sha256:3f767efdad4e6cbf193e1d0edfe6cb80b50fcea2587cfb280c26b47309bf4cd8
  - name: quay.io/modh/runtime-images@sha256:4c3098b7a369b8dad113be97f35ea202dde0a03d3d08a2c010e3b81209b39735
  - name: quay.io/modh/runtime-images@sha256:f4eb99da308e39f62c5794775f3f0e412a97a92121bc37bf47fb76f19482321e
  - name: quay.io/modh/text-generation-inference@sha256:aebf545d8048a59174f70334dc90c6b97ead4602a39cb7598ea68c8d199168a2
  - name: quay.io/modh/vllm@sha256:4f550996130e7d16cacb24ca9a2865e7cf51eddaab014ceaf31a1ea6ef86d4ec
  - name: quay.io/modh/vllm@sha256:7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5
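For reference, the mirror runs below pushed this image set with an oc-mirror v2 invocation along these lines; the target registry and the rhel namespace are taken from the logs, while the config file name and workspace path are assumptions:
# oc-mirror --v2 -c imageset-config.yaml --workspace file:///root/oc-mirror-workspace docker://rhel.skynet:8443/rhel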
1st REPRODUCER: SQLite database, 1 gunicorn-registry worker, running just oc-mirror
RESULT: failure to push, 502 on PUT requests
nginx stdout | 2025/07/04 13:21:00 [error] 111#0: *1469 upstream prematurely closed connection while reading response header from upstream, client: 172.24.10.50, server: _, request: "PUT /v2/rhel/modh/vllm/manifests/sha256-7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5 HTTP/1.1", upstream: "http://unix:/tmp/gunicorn_registry.sock:/v2/rhel/modh/vllm/manifests/sha256-7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5", host: "rhel.skynet:8443"
gunicorn-registry stdout | 2025-07-04 13:21:00,756 [66] [ERROR] [gunicorn.error] Worker (pid:1028) was sent SIGKILL! Perhaps out of memory?
...
nginx stdout | 172.24.10.50 (-) - - [04/Jul/2025:13:21:00 +0000] "PUT /v2/rhel/modh/vllm/manifests/sha256-7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5 HTTP/1.1" 502 287 "-" "oc-mirror" (32.877 1622 32.873 : 0.002)
3 successful pushes, 5 failures.
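The "Perhaps out of memory?" message comes from gunicorn observing that the worker died from SIGKILL; whether the kernel OOM killer was responsible can be confirmed from the host kernel log (a sketch, the time window is illustrative):
# journalctl -k --since "2025-07-04 13:00" | grep -iE 'out of memory|oom|killed process'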
2nd REPRODUCER: SQLite database, 1 gunicorn-registry worker, running oc-mirror, skopeo copy, and podman pull at the same time
- used "skopeo copy" to copy a set of images to the registry:
# for image in drupal joomla wordpress mariadb debian ubuntu alpine; do skopeo copy docker://$image:latest docker://rhel.skynet:8443/testuser/$image:latest; done
- constant parallel pull of said images from the registry:
# while true; do for image in drupal joomla wordpress mariadb debian ubuntu alpine; do podman pull rhel.skynet:8443/testuser/$image:latest & done; sleep 30; podman rmi -f $(podman images -a -q); done
- 5 minutes into the `oc-mirror` push, started pushing the complete manifest lists (all architectures, hence `-a`) to the registry (a verification sketch follows this list):
# for image in drupal joomla wordpress mariadb debian ubuntu alpine; do skopeo copy -a docker://$image docker://rhel.skynet:8443/testuser/$image:latest; done
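- to verify that this second pass replaced the single-arch copies with full manifest lists, the raw manifest type can be checked (a sketch; assumes the registry CA is trusted and jq is installed):
# skopeo inspect --raw docker://rhel.skynet:8443/testuser/alpine:latest | jq -r '.mediaType'
A manifest list reports application/vnd.docker.distribution.manifest.list.v2+json (or an OCI image index) here, rather than a single-image manifest type.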
RESULT: failure to push, 413 on PATCH requests
...
✗ (10m43s) vllm@sha256:7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5 ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) vllm@sha256:4f550996130e7d16cacb24ca9a2865e7cf51eddaab014ceaf31a1ea6ef86d4ec ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) runtime-images@sha256:4c3098b7a369b8dad113be97f35ea202dde0a03d3d08a2c010e3b81209b39735 ➡️ rhel.skynet:8443/rhel/modh/
8 / 8 (10m43s) [==============================================================================================================================================] 100 %
✗ (10m43s) text-generation-inference@sha256:aebf545d8048a59174f70334dc90c6b97ead4602a39cb7598ea68c8d199168a2 ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) runtime-images@sha256:f4eb99da308e39f62c5794775f3f0e412a97a92121bc37bf47fb76f19482321e ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) runtime-images@sha256:3f767efdad4e6cbf193e1d0edfe6cb80b50fcea2587cfb280c26b47309bf4cd8 ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) rocm-notebooks@sha256:d6ba168ec7d4bb59cef15a5b17855386da3bff8dadf9b156a1ae3bd1b670a3e7 ➡️ rhel.skynet:8443/rhel/modh/
✗ (10m43s) rocm-notebooks@sha256:b7960fd8d47794ec6b058924563277ecdc1662b5cb4c84489a5f7429a40b576d ➡️ rhel.skynet:8443/rhel/modh/
2025/07/04 16:24:47 [INFO] : === Results ===
2025/07/04 16:24:47 [INFO] : ✗ 0 / 8 additional images mirrored: Some additional images failed to be mirrored - please check the logs
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/vllm@sha256:4f550996130e7d16cacb24ca9a2865e7cf51eddaab014ceaf31a1ea6ef86d4ec error: copying image 1/1 from manifest list: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/vllm@sha256:7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5 error: copying image 1/1 from manifest list: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:4c3098b7a369b8dad113be97f35ea202dde0a03d3d08a2c010e3b81209b39735 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/rocm-notebooks@sha256:d6ba168ec7d4bb59cef15a5b17855386da3bff8dadf9b156a1ae3bd1b670a3e7 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:3f767efdad4e6cbf193e1d0edfe6cb80b50fcea2587cfb280c26b47309bf4cd8 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:f4eb99da308e39f62c5794775f3f0e412a97a92121bc37bf47fb76f19482321e error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/rocm-notebooks@sha256:b7960fd8d47794ec6b058924563277ecdc1662b5cb4c84489a5f7429a40b576d error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 16:24:47 [ERROR] : [Worker] error mirroring image quay.io/modh/text-generation-inference@sha256:aebf545d8048a59174f70334dc90c6b97ead4602a39cb7598ea68c8d199168a2 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
- Quay logs:
# cat quay.log | grep -Pon '\"\w+? \/.+?\/.* HTTP\/\d.\d\" \d{3}' | grep PATCH | grep -v "\" 2.."
27912:"PATCH /v2/rhel/modh/vllm/blobs/uploads/3d8b0fbb-1353-4eb6-b51e-5094ead03911 HTTP/1.1" 408
29355:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/d13524eb-eec3-4e74-8b9e-b74bd50b40fc HTTP/1.1" 408
137042:"PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/fae9fd51-aaff-4a9a-9056-cf866075a75d HTTP/1.1" 413
137097:"PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/b60fa691-84f0-4e63-9d5e-2964701f38b7 HTTP/1.1" 413
137128:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/7d2c3b4c-bdd2-484f-ac13-6f69dc3ea56c HTTP/1.1" 413
137129:"PATCH /v2/testuser/joomla/blobs/uploads/0c8dcd76-4a48-4ff3-a138-9d0e4a1a288b HTTP/1.1" 413
137130:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/efdd6467-5c04-460d-8609-2004cc516fa0 HTTP/1.1" 413
137133:"PATCH /v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e HTTP/1.1" 413
137135:"PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/7c1ca769-f439-4257-a2d0-a512526708dc HTTP/1.1" 413
137136:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/75d02313-d28c-4884-bc03-898bfcc249f3 HTTP/1.1" 413
137137:"PATCH /v2/rhel/modh/vllm/blobs/uploads/410a7502-fcb6-4d24-b1cf-10658b320078 HTTP/1.1" 413
137138:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/5f863151-435d-448a-9a35-3f00ff7df056 HTTP/1.1" 413
137145:"PATCH /v2/rhel/modh/vllm/blobs/uploads/ad3bf7c3-8ddb-4150-a85a-71b4c35f3d40 HTTP/1.1" 413
137150:"PATCH /v2/rhel/modh/runtime-images/blobs/uploads/e42509b3-944a-4964-9e2e-c0cedc7471b0 HTTP/1.1" 413
186762:"PATCH /v2/testuser/ubuntu/blobs/uploads/7ff5e228-a79e-4694-8274-2669617787c4 HTTP/1.1" 400
nginx stdout | 2025/07/04 14:25:09 [error] 103#0: *3624 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 172.24.10.50, server: _, request: "PATCH /v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e HTTP/1.1", upstream: "http://unix:/tmp/gunicorn_registry.sock:/v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e", host: "rhel.skynet:8443"
nginx stdout | 2025/07/04 14:25:09 [error] 103#0: *3624 client intended to send too large body: 3985836408 bytes, client: 172.24.10.50, server: _, request: "PATCH /v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e HTTP/1.1", upstream: "http://unix:/tmp/gunicorn_registry.sock/v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e", host: "rhel.skynet:8443"
1 "PATCH /v2/rhel/modh/text-generation-inference/blobs/uploads/b8817c19-4edb-4e00-9fe3-89b5660d079e HTTP/1.1" 413
- a large number of exceptions raised for database locks:
# cat quay.log | grep -Po "([A-za-z-]*) ([A-Za-z]*?) \| ([a-z]+)\.([A-Za-z\.]*)\:?(.*)?" | sort | uniq -c
8 buildlogsarchiver stdout | peewee.OperationalError: database is locked
8 exportactionlogsworker stdout | peewee.OperationalError: database is locked
18 gcworker stdout | peewee.OperationalError: database is locked
10 manifestsubjectbackfillworker stdout | peewee.OperationalError: database is locked
6 namespacegcworker stdout | peewee.OperationalError: database is locked
14 notificationworker stdout | peewee.OperationalError: database is locked
2 repositoryactioncounter stdout | peewee.OperationalError: database is locked
4 repositorygcworker stdout | peewee.OperationalError: database is locked
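- SQLite allows only one writer at a time, so a long-running blob upload can starve all of the background workers listed above; whether the database at least uses WAL mode (readers can proceed during a write) can be checked directly (a sketch: the database path is a guess, and this assumes the sqlite3 CLI is available in the container):
# podman exec quay sqlite3 /quay-registry/quay.db 'PRAGMA journal_mode;'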
3rd REPRODUCER: PostgreSQL database, 1 gunicorn-registry worker, same image copy setup as in reproducer #2
RESULT: failure to push, 413 on PATCH requests
...
2025/07/04 18:06:30 [INFO] : === Results ===
2025/07/04 18:06:30 [INFO] : ✗ 2 / 8 additional images mirrored: Some additional images failed to be mirrored - please check the logs
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:4c3098b7a369b8dad113be97f35ea202dde0a03d3d08a2c010e3b81209b39735 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/vllm@sha256:7e1d1985b0dd2b5ba2df41fc9c8c3edf13a2d9ed8a4d84db8f00eb6c753bc5c5 error: copying image 1/1 from manifest list: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/rocm-notebooks@sha256:b7960fd8d47794ec6b058924563277ecdc1662b5cb4c84489a5f7429a40b576d error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/vllm@sha256:4f550996130e7d16cacb24ca9a2865e7cf51eddaab014ceaf31a1ea6ef86d4ec error: copying image 1/1 from manifest list: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:3f767efdad4e6cbf193e1d0edfe6cb80b50fcea2587cfb280c26b47309bf4cd8 error: writing blob: uploading layer chunked: StatusCode: 413, "\r\n413 Request Entity Too Large<..."
2025/07/04 18:06:30 [ERROR] : [Worker] error mirroring image quay.io/modh/runtime-images@sha256:f4eb99da308e39f62c5794775f3f0e412a97a92121bc37bf47fb76f19482321e error: writing blob: Patch "https://rhel.skynet:8443/v2/rhel/modh/runtime-images/blobs/uploads/df5130ec-5054-43b4-b671-a3d1c0758f6b": context deadline exceeded
...
- Quay logs:
[root@rhel ~]# cat quay-app-postgresql-1-worker.log | grep -Po '\"\w+? \/.+?\/.* HTTP\/\d.\d\" \d{3}' | sort | uniq -c | grep PATCH | grep -v "\" 2.."
1 "PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa HTTP/1.1" 413
1 "PATCH /v2/rhel/modh/runtime-images/blobs/uploads/30e839f5-a88b-4fab-bbc5-9b04b1d98c24 HTTP/1.1" 413
1 "PATCH /v2/rhel/modh/runtime-images/blobs/uploads/df5130ec-5054-43b4-b671-a3d1c0758f6b HTTP/1.1" 400
1 "PATCH /v2/rhel/modh/runtime-images/blobs/uploads/ef5e9ea0-2870-492a-95e4-e7c42bfaaf51 HTTP/1.1" 413
1 "PATCH /v2/rhel/modh/vllm/blobs/uploads/01437a13-e73f-4d68-a9d5-8fd43f9aeff2 HTTP/1.1" 413
1 "PATCH /v2/rhel/modh/vllm/blobs/uploads/0b990d77-7ec9-4052-a0fa-b100e3badd9b HTTP/1.1" 413
nginx stdout | 2025/07/04 15:36:23 [error] 103#0: *1612 upstream prematurely closed connection while reading response header from upstream, client: 172.24.10.50, server: _, request: "PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa HTTP/1.1", upstream: "http://unix:/tmp/gunicorn_registry.sock:/v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa", host: "rhel.skynet:8443"
nginx stdout | 2025/07/04 15:36:23 [error] 103#0: *1612 client intended to send too large body: 5523230050 bytes, client: 172.24.10.50, server: _, request: "PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa HTTP/1.1", upstream: "http://unix:/tmp/gunicorn_registry.sock/v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa", host: "rhel.skynet:8443"
nginx stdout | 172.24.10.50 (-) - - [04/Jul/2025:15:36:23 +0000] "PATCH /v2/rhel/modh/rocm-notebooks/blobs/uploads/eb256cd9-7b3b-4db5-8838-c98db4c3cbfa HTTP/1.1" 413 183 "-" "oc-mirror" (1661.482 4501082946 1661.475)
- no exceptions raised for db locks
- atop data for gunicorn processes:
74716 - 2 0 4.0K 28.3M 129.4M 136.0K 0.0K 456.5M 140.8M 0B 0B 0B 0B 1001 1001 0% gunicorn
74960 - 0 0 4.0K 27.5M 162.0M 144.0K 0.0K 1.0G 139.6M 0B 0B 0B 0B 1001 1001 0% gunicorn
74953 - 2 0 4.0K 28.3M 130.4M 136.0K 0.0K 457.5M 132.4M 0B 0B 0B 0B 1001 1001 0% gunicorn
- the gunicorn-registry process consistently consumed around 1 GiB of memory.
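- the atop samples above are point-in-time; aggregate RSS of the registry workers can be tracked over a whole run with a small loop (a sketch, reusing the registry:app match from the ps commands below):
# while true; do printf '%s ' "$(date +%T)"; ps auxf | grep -i 'registry:app' | grep -v grep | awk '{sum += $6} END {print sum/1024 " MiB RSS"}'; sleep 10; done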
4th REPRODUCER: PostgreSQL 15, Quay running with 8 gunicorn-registry workers, same image copy setup as in reproducer #2
gunicorn-secscan stdout | 2025-07-04 15:55:32,681 [67] [DEBUG] [__config__] Starting secscan gunicorn with 8 workers and gevent worker class
gunicorn-web stdout | 2025-07-04 15:55:34,038 [68] [DEBUG] [__config__] Starting web gunicorn with 8 workers and gevent worker class
gunicorn-registry stdout | 2025-07-04 15:55:35,022 [66] [DEBUG] [__config__] Starting registry gunicorn with 8 workers and gevent worker class
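- one way to set these per-service worker counts (hedged; assuming the WORKER_COUNT_* environment variables supported by the Quay image, with the image tag left as a placeholder):
# podman run -d --name quay ... -e WORKER_COUNT_REGISTRY_HANDLER=8 -e WORKER_COUNT_WEB=8 -e WORKER_COUNT_SECSCAN=8 registry.redhat.io/quay/quay-rhel8:<tag>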
RESULT: all pushes succeeded without errors.
[root@rhel ~]# cat quay-app-postgresql-8-workers.log | grep -Po '\"\w+? \/.+?\/.* HTTP\/\d.\d\" \d{3}' | sort | uniq -c | grep PATCH | grep -v "\" 2.." | wc -l
0
- no worker timeouts reported:
[root@rhel ~]# grep -i "timeout" quay-app-postgresql-8-workers.log
gunicorn-registry stdout | 2025-07-04 15:55:34,365 [66] [INFO] [data.database] Connection pooling enabled for postgresql; stale timeout: None; max connection count: None
- combined memory usage of all gunicorn-registry processes (ps reports sizes in KiB):
[root@rhel ~]# ps auxf | grep -i "registry:app" | grep -v '\-\-color' | awk '{sum1 += $5; sum2 += $6} END {print "VSS: " sum1", RSS: " sum2}'
VSS: 4166320, RSS: 1172688
- individual gunicorn-registry worker memory consumption:
[root@rhel ~]# ps auxf | grep -i "registry:app" | grep -v '\-\-color'
1001 85205 0.1 0.4 448076 131356 ? SN 17:55 0:04 \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85450 6.3 0.4 456268 130652 ? SN 17:55 2:43 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85457 1.2 0.3 455244 129872 ? SN 17:55 0:32 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85458 7.9 0.3 455244 129856 ? SN 17:55 3:24 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85459 2.7 0.4 529744 130512 ? SN 17:55 1:10 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85460 7.4 0.3 455500 130000 ? SN 17:55 3:12 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85461 17.1 0.3 454732 130240 ? SN 17:55 7:19 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85462 10.1 0.4 455756 130320 ? SN 17:55 4:20 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
1001 85463 12.9 0.3 455756 129880 ? SN 17:55 5:32 | \_ /usr/bin/python3 /app/bin//gunicorn -c /quay-registry/conf/gunicorn_registry.py registry:application
5th REPRODUCER: SQLite database, Quay running with 8 gunicorn-registry workers, same image copy setup as in reproducer #2
RESULT: immediate failure on the initial push of the test images
- raised exception:
gunicorn-registry stdout | 2025-07-04 16:53:57,258 [266] [ERROR] [gunicorn.error] Error handling request /v2/auth?account=testuser&scope=repository%3Atestuser%2Fmariadb%3Apull%2Cpush&service=rhel.skynet%3A8443
gunicorn-registry stdout | Traceback (most recent call last):
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 3057, in execute_sql
gunicorn-registry stdout | cursor.execute(sql, params or ())
gunicorn-registry stdout | sqlite3.OperationalError: database is locked
gunicorn-registry stdout | During handling of the above exception, another exception occurred:
gunicorn-registry stdout | Traceback (most recent call last):
gunicorn-registry stdout | File "/quay-registry/data/database.py", line 228, in execute_sql
gunicorn-registry stdout | cursor = super(RetryOperationalError, self).execute_sql(sql, params, commit)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 3064, in execute_sql
gunicorn-registry stdout | self.commit()
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 2831, in __exit__
gunicorn-registry stdout | reraise(new_type, new_type(exc_value, *exc_args), traceback)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 183, in reraise
gunicorn-registry stdout | raise value.with_traceback(tb)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 3057, in execute_sql
gunicorn-registry stdout | cursor.execute(sql, params or ())
gunicorn-registry stdout | peewee.OperationalError: database is locked
...
gunicorn-registry stdout | File "/quay-registry/data/database.py", line 397, in close
gunicorn-registry stdout | ret = super(ObservableDatabase, self).close()
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/peewee.py", line 3011, in close
gunicorn-registry stdout | raise OperationalError('Attempting to close database while '
gunicorn-registry stdout | peewee.OperationalError: Attempting to close database while transaction is open.
- Quay logs:
[root@rhel ~]# cat quay-app-sqlite-8-workers.log | grep -Po '\"\w+? \/.+?\/.* HTTP\/\d.\d\" \d{3}' | sort | uniq -c | grep -i "\" 5.."
8 "GET /v2/auth?account=testuser&scope=repository%3Atestuser%2Fdrupal%3Apull%2Cpush&service=rhel.skynet%3A8443 HTTP/1.1" 500
4 "GET /v2/auth?account=testuser&scope=repository%3Atestuser%2Fjoomla%3Apull%2Cpush&service=rhel.skynet%3A8443 HTTP/1.1" 500
2 "GET /v2/auth?account=testuser&scope=repository%3Atestuser%2Fmariadb%3Apull%2Cpush&service=rhel.skynet%3A8443 HTTP/1.1" 500
2 "GET /v2/auth?account=testuser&scope=repository%3Atestuser%2Fwordpress%3Apull%2Cpush&service=rhel.skynet%3A8443 HTTP/1.1" 500
1 "PUT /v2/testuser/joomla/blobs/uploads/c40cec61-a7c4-4bbb-a565-0b4ecff0638d?digest=sha256%3Ac7fbcee9efb268dd6a9acec79dd139d174282f0f7f2d35c6fbe16feda9f3daf7 HTTP/1.1" 502
1 "PUT /v2/testuser/wordpress/blobs/uploads/1191ea46-1735-4299-806e-54870dfc1843?digest=sha256%3Afadfb64342a35cd7f0464c60ad01656fd10023d87e4a26c5cfb9d04811ae0fa8 HTTP/1.1" 502
1 "PUT /v2/testuser/wordpress/blobs/uploads/1c813162-6bb9-4c9b-aaf9-7a0459045951?digest=sha256%3Af05433f0219e1fa34a869758515852566ce94551b5f23cfe1384a7a4a7ef7866 HTTP/1.1" 502ž
- raised exceptions:
[root@rhel ~]# cat quay-app-sqlite-8-workers.log | grep -Po "([A-za-z-]*) ([A-Za-z]*?) \| ([a-z]+)\.([A-Za-z\.]*)\:?(.*)?" | sort | uniq -c
6 gunicorn-registry stdout | data.database.ImageStorageDoesNotExist: instance matching query does not exist:
14 gunicorn-registry stdout | peewee.OperationalError: Attempting to close database while transaction is open.
14 gunicorn-registry stdout | peewee.OperationalError: database is locked