-
Bug
-
Resolution: Done
-
Major
-
None
-
False
-
False
-
Quay Enterprise
-
Undefined
-
-
0
Red Hat Quay 3.3.x
BlobUploadCleanup fails with the following stack trace:
blobuploadcleanupworker stdout | 2021-02-18 03:24:20,802 [102] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."uuid", "t1"."byte_count", "t1"."sha_state", "t1"."location_id", "t1"."storage_metadata", "t1"."chunk_count", "t1"."uncompressed_byte_count", "t1"."created", "t1"."piece_sha_state", "t1"."piece_hashes", "t2"."id", "t2"."name" FROM "blobupload" AS "t1" INNER JOIN "imagestoragelocation" AS "t2" ON ("t1"."location_id" = "t2"."id") WHERE ("t1"."created" <= %s) LIMIT %s OFFSET %s', [datetime.datetime(2021, 2, 16, 3, 24, 20, 800738), 1, 0])
blobuploadcleanupworker stdout | 2021-02-18 03:24:20,819 [102] [DEBUG] [data.database] Disconnecting from database.
2021-02-18 03:24:20,819 [102] [DEBUG] [util.locking] Releasing lock BLOB_CLEANUP
blobuploadcleanupworker stdout | 2021-02-18 03:24:20,821 [102] [DEBUG] [util.locking] Released lock BLOB_CLEANUP
blobuploadcleanupworker stdout | 2021-02-18 03:24:20,821 [102] [ERROR] [workers.worker] Operation raised exception
Traceback (most recent call last):
  File "workers/worker.py", line 87, in _operation_func
    return operation_func()
  File "/quay-registry/workers/blobuploadcleanupworker/blobuploadcleanupworker.py", line 32, in _try_cleanup_uploads
    self._cleanup_uploads()
  File "/quay-registry/workers/blobuploadcleanupworker/blobuploadcleanupworker.py", line 46, in _cleanup_uploads
    stale_upload = model.get_stale_blob_upload(DELETION_DATE_THRESHOLD)
  File "workers/blobuploadcleanupworker/models_pre_oci.py", line 13, in get_stale_blob_upload
    blob_upload = model.blob.get_stale_blob_upload(stale_threshold)
  File "data/model/blob.py", line 185, in get_stale_blob_upload
    return candidates.get()
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 6665, in get
    return clone.execute(database)[0]
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 4120, in __getitem__
    self.fill_cache(item if item > 0 else 0)
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 4168, in fill_cache
    iterator.next()
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 4224, in next
    self.cursor_wrapper.iterate()
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 4143, in iterate
    result = self.process_row(row)
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/peewee.py", line 7345, in process_row
    value = self.converters[idx](value)
  File "data/fields.py", line 77, in python_value
    return base64.b64decode(value)
  File "/opt/rh/python27/root/usr/lib64/python2.7/base64.py", line 78, in b64decode
    raise TypeError(msg)
TypeError: Incorrect padding
blobuploadcleanupworker stdout | 2021-02-18 03:24:20,829 [102] [INFO] [apscheduler.executors.default] Job "_try_cleanup_uploads (trigger: interval[1:00:00], next run at: 2021-02-18 04:24:20 UTC)" executed successfully
I am not certain why this would fail with a base64 decode error, since all we are trying to do is fetch BlobUpload rows older than the time set in DELETION_DATE_THRESHOLD. The traceback shows the failure happens while converting a fetched column value (data/fields.py, python_value), not while building the query itself.
Because the worker crashes before deleting anything, no stale uploads are cleaned up, so over time the blobupload table can grow very large and cause issues.
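For context on the error itself: base64.b64decode() raises "Incorrect padding" whenever the input's length is not a multiple of 4, which suggests a truncated or corrupted base64 value stored in one of the blobupload columns. The sketch below (the safe_b64decode helper is hypothetical, not Quay's actual code) reproduces the failure and shows a defensive re-padding workaround, assuming the stored data is merely missing its trailing "=" characters:

```python
import base64
import binascii


def safe_b64decode(value):
    # Hypothetical helper: re-pad the input to a multiple of 4 before
    # decoding, the kind of defensive fix python_value() could apply.
    missing = len(value) % 4
    if missing:
        value += "=" * (4 - missing)
    return base64.b64decode(value)


# "aGVsbG8" is valid base64 for b"hello" but is missing its trailing "=",
# so its length (7) is not a multiple of 4 and decoding fails outright.
# Python 2 raises TypeError (as in the traceback); Python 3 raises
# binascii.Error, with the same "Incorrect padding" message.
try:
    base64.b64decode("aGVsbG8")
except (TypeError, binascii.Error) as exc:
    print(exc)            # Incorrect padding

print(safe_b64decode("aGVsbG8"))  # b'hello'
```

If the stored value is corrupted rather than just unpadded, re-padding would decode to garbage instead of failing, so this only masks the symptom; identifying which row holds the bad value would still be needed.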
The full logs are too big to upload here; however, debug logs can be found attached to the case (upload done on 18 Feb).