[ASCII art: Quay logo]
by Red Hat
Build, Store, and Distribute your Containers

Startup timestamp: Fri Feb 14 01:46:44 UTC 2025
Running all default registry services without migration
Running init script '/quay-registry/conf/init/certs_install.sh'
Installing extra certificates found in /quay-registry/conf/stack/extra_ca_certs directory
Running init script '/quay-registry/conf/init/client_certs.sh'
Running init script '/quay-registry/conf/init/copy_config_files.sh'
Running init script '/quay-registry/conf/init/d_validate_config_bundle.sh'
Validating Configuration
time="2025-02-14T01:46:44Z" level=debug msg="Validating AccessSettings"
time="2025-02-14T01:46:44Z" level=debug msg="Validating ActionLogArchiving"
time="2025-02-14T01:46:44Z" level=debug msg="Validating AppTokenAuthentication"
time="2025-02-14T01:46:44Z" level=debug msg="Validating AutoPrune"
time="2025-02-14T01:46:44Z" level=debug msg="Validating BitbucketBuildTrigger"
time="2025-02-14T01:46:44Z" level=debug msg="Validating BuildManager"
time="2025-02-14T01:46:44Z" level=debug msg="Validating Database"
time="2025-02-14T01:46:44Z" level=debug msg="Scheme: postgresql"
time="2025-02-14T01:46:44Z" level=debug msg="Host: quayregistry-quay-database:5432"
time="2025-02-14T01:46:44Z" level=debug msg="Db: quayregistry-quay-database"
time="2025-02-14T01:46:44Z" level=debug msg="Params: "
time="2025-02-14T01:46:44Z" level=debug msg="Including params "
time="2025-02-14T01:46:44Z" level=debug msg="Pinging database at hostname: quayregistry-quay-database:5432."
time="2025-02-14T01:46:44Z" level=debug msg="Database version: 13.18"
plpgsql pg_trgm
time="2025-02-14T01:46:44Z" level=debug msg="Validating DistributedStorage"
time="2025-02-14T01:46:44Z" level=debug msg="Using IBM Cloud/ODF/RadosGW storage."
time="2025-02-14T01:46:44Z" level=debug msg="Storage parameters: " time="2025-02-14T01:46:44Z" level=debug msg="hostname: s3.openshift-storage.svc.cluster.local:443, bucket name: quay-datastore-1a31ad2b-a776-4391-9c8f-f9db73d28315, TLS enabled: true" time="2025-02-14T01:46:44Z" level=debug msg="Validating ElasticSearch" time="2025-02-14T01:46:44Z" level=debug msg="Validating Email" time="2025-02-14T01:46:44Z" level=debug msg="Validating GitHubBuildTrigger" time="2025-02-14T01:46:44Z" level=debug msg="Validating GitHubLogin" time="2025-02-14T01:46:44Z" level=debug msg="Validating GitLabBuildTrigger" time="2025-02-14T01:46:44Z" level=debug msg="Validating GoogleLogin" time="2025-02-14T01:46:44Z" level=debug msg="Validating HostSettings" time="2025-02-14T01:46:44Z" level=debug msg="Validating JWTAuthentication" time="2025-02-14T01:46:44Z" level=debug msg="Validating LDAP" time="2025-02-14T01:46:44Z" level=debug msg="Validating OIDC" time="2025-02-14T01:46:44Z" level=debug msg="Validating QuayDocumentation" time="2025-02-14T01:46:44Z" level=debug msg="Validating Redis" time="2025-02-14T01:46:44Z" level=debug msg="Address: quayregistry-quay-redis:6379" time="2025-02-14T01:46:44Z" level=debug msg="Username: " time="2025-02-14T01:46:44Z" level=debug msg="Password Len: 0" time="2025-02-14T01:46:44Z" level=debug msg="Ssl: " time="2025-02-14T01:46:44Z" level=debug msg="Address: quayregistry-quay-redis:6379" time="2025-02-14T01:46:44Z" level=debug msg="Username: " time="2025-02-14T01:46:44Z" level=debug msg="Password Len: 0" time="2025-02-14T01:46:44Z" level=debug msg="Ssl: " time="2025-02-14T01:46:44Z" level=debug msg="Validating RepoMirror" time="2025-02-14T01:46:44Z" level=debug msg="Validating SecurityScanner" time="2025-02-14T01:46:44Z" level=debug msg="Validating TeamSyncing" time="2025-02-14T01:46:44Z" level=debug msg="Validating TimeMachine" time="2025-02-14T01:46:44Z" level=debug msg="Validating UserVisibleSettings" +------------------------+-------+--------+ | Field Group | Error | Status | +------------------------+-------+--------+ | AccessSettings | - | 🟢 | +------------------------+-------+--------+ | ActionLogArchiving | - | 🟢 | +------------------------+-------+--------+ | AppTokenAuthentication | - | 🟢 | +------------------------+-------+--------+ | AutoPrune | - | 🟢 | +------------------------+-------+--------+ | BitbucketBuildTrigger | - | 🟢 | +------------------------+-------+--------+ | BuildManager | - | 🟢 | +------------------------+-------+--------+ | Database | - | 🟢 | +------------------------+-------+--------+ | DistributedStorage | - | 🟢 | +------------------------+-------+--------+ | ElasticSearch | - | 🟢 | +------------------------+-------+--------+ | Email | - | 🟢 | +------------------------+-------+--------+ | GitHubBuildTrigger | - | 🟢 | +------------------------+-------+--------+ | GitHubLogin | - | 🟢 | +------------------------+-------+--------+ | GitLabBuildTrigger | - | 🟢 | +------------------------+-------+--------+ | GoogleLogin | - | 🟢 | +------------------------+-------+--------+ | HostSettings | - | 🟢 | +------------------------+-------+--------+ | JWTAuthentication | - | 🟢 | +------------------------+-------+--------+ | LDAP | - | 🟢 | +------------------------+-------+--------+ | OIDC | - | 🟢 | +------------------------+-------+--------+ | QuayDocumentation | - | 🟢 | +------------------------+-------+--------+ | Redis | - | 🟢 | +------------------------+-------+--------+ | RepoMirror | - | 🟢 | +------------------------+-------+--------+ | SecurityScanner | - 
| 🟢 | +------------------------+-------+--------+ | TeamSyncing | - | 🟢 | +------------------------+-------+--------+ | TimeMachine | - | 🟢 | +------------------------+-------+--------+ | UserVisibleSettings | - | 🟢 | +------------------------+-------+--------+ Running init script '/quay-registry/conf/init/nginx_conf_create.sh' Running init script '/quay-registry/conf/init/supervisord_conf_create.sh' Running init script '/quay-registry/conf/init/zz_boot.sh' 2025-02-14 01:46:47,425 INFO RPC interface 'supervisor' initialized 2025-02-14 01:46:47,425 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2025-02-14 01:46:47,425 INFO supervisord started with pid 7 2025-02-14 01:46:48,427 INFO spawned: 'stdout' with pid 55 2025-02-14 01:46:48,428 INFO spawned: 'autopruneworker' with pid 56 2025-02-14 01:46:48,430 INFO spawned: 'blobuploadcleanupworker' with pid 57 2025-02-14 01:46:48,431 INFO spawned: 'builder' with pid 58 2025-02-14 01:46:48,432 INFO spawned: 'buildlogsarchiver' with pid 59 2025-02-14 01:46:48,434 INFO spawned: 'chunkcleanupworker' with pid 60 2025-02-14 01:46:48,435 INFO spawned: 'dnsmasq' with pid 61 2025-02-14 01:46:48,436 INFO spawned: 'expiredappspecifictokenworker' with pid 62 2025-02-14 01:46:48,438 INFO spawned: 'exportactionlogsworker' with pid 63 2025-02-14 01:46:48,440 INFO spawned: 'gcworker' with pid 64 2025-02-14 01:46:48,441 INFO spawned: 'globalpromstats' with pid 65 2025-02-14 01:46:48,443 INFO spawned: 'gunicorn-registry' with pid 66 2025-02-14 01:46:48,444 INFO spawned: 'gunicorn-secscan' with pid 67 2025-02-14 01:46:48,446 INFO spawned: 'gunicorn-web' with pid 68 2025-02-14 01:46:48,447 INFO spawned: 'logrotateworker' with pid 69 2025-02-14 01:46:48,449 INFO spawned: 'manifestbackfillworker' with pid 70 2025-02-14 01:46:48,450 INFO spawned: 'manifestsubjectbackfillworker' with pid 71 2025-02-14 01:46:48,452 INFO spawned: 'memcache' with pid 72 2025-02-14 01:46:48,453 INFO spawned: 'namespacegcworker' with pid 73 2025-02-14 01:46:48,455 INFO spawned: 'nginx' with pid 74 2025-02-14 01:46:48,489 INFO spawned: 'notificationworker' with pid 75 2025-02-14 01:46:48,491 INFO spawned: 'pushgateway' with pid 76 2025-02-14 01:46:48,492 INFO spawned: 'queuecleanupworker' with pid 77 2025-02-14 01:46:48,494 INFO spawned: 'quotaregistrysizeworker' with pid 78 2025-02-14 01:46:48,495 INFO spawned: 'quotatotalworker' with pid 79 2025-02-14 01:46:48,497 INFO spawned: 'reconciliationworker' with pid 80 2025-02-14 01:46:48,500 INFO spawned: 'repositoryactioncounter' with pid 81 2025-02-14 01:46:48,513 INFO spawned: 'repositorygcworker' with pid 85 2025-02-14 01:46:48,514 INFO spawned: 'securityscanningnotificationworker' with pid 87 2025-02-14 01:46:48,516 INFO spawned: 'securityworker' with pid 88 2025-02-14 01:46:48,517 INFO spawned: 'servicekey' with pid 89 2025-02-14 01:46:48,590 INFO spawned: 'storagereplication' with pid 90 2025-02-14 01:46:48,600 INFO spawned: 'teamsyncworker' with pid 92 2025-02-14 01:46:49,796 INFO success: stdout entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: autopruneworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: blobuploadcleanupworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: builder entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: 
buildlogsarchiver entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: chunkcleanupworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,796 INFO success: dnsmasq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: expiredappspecifictokenworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: exportactionlogsworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: gcworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: globalpromstats entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: gunicorn-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: gunicorn-secscan entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: gunicorn-web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: logrotateworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: manifestbackfillworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: manifestsubjectbackfillworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: memcache entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: namespacegcworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: notificationworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: pushgateway entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: queuecleanupworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: quotaregistrysizeworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: quotatotalworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: reconciliationworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: repositoryactioncounter entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: repositorygcworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: securityscanningnotificationworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: securityworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: servicekey entered RUNNING state, process has stayed up for 
> than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: storagereplication entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2025-02-14 01:46:49,797 INFO success: teamsyncworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) dnsmasq stderr | dnsmasq: started, version 2.79 cachesize 150 dnsmasq stderr | dnsmasq: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify dnsmasq stderr | dnsmasq: reading /etc/resolv.conf dnsmasq stderr | dnsmasq: using nameserver 172.30.0.10#53 dnsmasq stderr | dnsmasq: read /etc/hosts - 7 addresses nginx stdout | 2025/02/14 01:46:48 [warn] 74#0: the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:40 nginx stdout | 2025/02/14 01:46:48 [warn] 74#0: the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:67 nginx stdout | 2025/02/14 01:46:48 [warn] 74#0: the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:91 nginx stderr | nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:40 nginx stderr | nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:67 nginx stderr | nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /quay-registry/conf/nginx/nginx.conf:91 pushgateway stderr | ts=2025-02-14T01:46:48.604Z caller=main.go:86 level=info msg="starting pushgateway" version="(version=, branch=, revision=unknown)" pushgateway stderr | ts=2025-02-14T01:46:48.604Z caller=main.go:87 level=info build_context="(go=go1.19.13 X:strictfipsruntime, platform=linux/amd64, user=, date=, tags=strictfipsruntime)" pushgateway stderr | ts=2025-02-14T01:46:48.605Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9091 pushgateway stderr | ts=2025-02-14T01:46:48.605Z caller=tls_config.go:277 level=info msg="TLS is disabled." 
http2=false address=[::]:9091 nginx stdout | 2025/02/14 01:46:48 [alert] 102#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 101#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 103#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 104#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 99#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 98#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 100#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:48 [alert] 105#0: setpriority(-10) failed (13: Permission denied) nginx stdout | 2025/02/14 01:46:59 [crit] 102#0: *1 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/health/instance", host: "10.129.2.28:8443" nginx stdout | 2025/02/14 01:46:59 [crit] 102#0: *1 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/quay-registry/static/502.html", host: "10.129.2.28:8443" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:46:59 +0000] "GET /health/instance HTTP/2.0" 502 157 "-" "kube-probe/1.30" (0.000 47 0.000 : 0.000) nginx stdout | 2025/02/14 01:47:14 [crit] 101#0: *4 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/health/instance", host: "10.129.2.28:8443" nginx stdout | 2025/02/14 01:47:14 [crit] 101#0: *4 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/quay-registry/static/502.html", host: "10.129.2.28:8443" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:14 +0000] "GET /health/instance HTTP/2.0" 502 157 "-" "kube-probe/1.30" (0.000 47 0.000 : 0.000) quotaregistrysizeworker stdout | 2025-02-14 01:47:17,897 [78] [DEBUG] [workers.worker] Scheduling worker. 
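The [crit] entries and 502 responses above are the kubelet readiness probe hitting nginx before gunicorn-web has created /tmp/gunicorn_web.sock; the same probe path is served normally once gunicorn-web finishes booting later in the log. A minimal sketch for pulling the probe results out of a saved copy of this log is shown below; the local file name quay-app.log is an assumption, and the regex is derived only from the access-log lines visible here.

import re

# Matches nginx access-log lines of the form seen above, e.g.
# nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:46:59 +0000] "GET /health/instance HTTP/2.0" 502 157 ...
PROBE_RE = re.compile(
    r'^nginx stdout \| (?P<client>\S+) .*?\[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) '
)

def probe_results(log_path="quay-app.log"):
    """Yield (timestamp, path, status) for each health-probe access-log line in the saved pod log."""
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            m = PROBE_RE.match(line)
            if m and m.group("path").startswith("/health/"):
                yield m.group("ts"), m.group("path"), int(m.group("status"))

if __name__ == "__main__":
    for ts, path, status in probe_results():
        print(f"{ts}  {path}  {status}  {'FAIL' if status >= 500 else 'OK'}")

Run against this log, the sketch would list the two 502s at 01:46:59 and 01:47:14 shown above; whether later probes return 200 depends on the rest of the capture.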
quotaregistrysizeworker stdout | 2025-02-14 01:47:17,897 [78] [DEBUG] [workers.worker] Scheduling worker.
quotaregistrysizeworker stdout | 2025-02-14 01:47:17,907 [78] [INFO] [apscheduler.scheduler] Scheduler started
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,008 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,009 [78] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,008 [78] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:46.009738
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,014 [78] [INFO] [apscheduler.scheduler] Added job "QuotaRegistrySizeWorker._calculate_registry_size" to job store "default"
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,089 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
quotaregistrysizeworker stdout | 2025-02-14 01:47:18,089 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:46.009738+00:00 (in 27.920158 seconds)
queuecleanupworker stdout | 2025-02-14 01:47:19,994 [77] [DEBUG] [workers.worker] Scheduling worker.
queuecleanupworker stdout | 2025-02-14 01:47:19,994 [77] [INFO] [apscheduler.scheduler] Scheduler started
queuecleanupworker stdout | 2025-02-14 01:47:20,014 [77] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
queuecleanupworker stdout | 2025-02-14 01:47:20,014 [77] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
queuecleanupworker stdout | 2025-02-14 01:47:20,089 [77] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 21:06:37.090419
queuecleanupworker stdout | 2025-02-14 01:47:20,091 [77] [INFO] [apscheduler.scheduler] Added job "QueueCleanupWorker._cleanup_queue" to job store "default"
queuecleanupworker stdout | 2025-02-14 01:47:20,094 [77] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
queuecleanupworker stdout | 2025-02-14 01:47:20,094 [77] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 21:06:37.090419+00:00 (in 69556.995793 seconds)
teamsyncworker stdout | 2025-02-14 01:47:22,403 [92] [DEBUG] [__main__] Team syncing is disabled; sleeping
namespacegcworker stdout | 2025-02-14 01:47:22,492 [73] [DEBUG] [__main__] Starting namespace GC worker
namespacegcworker stdout | 2025-02-14 01:47:22,494 [73] [DEBUG] [workers.worker] Scheduling worker.
namespacegcworker stdout | 2025-02-14 01:47:22,495 [73] [INFO] [apscheduler.scheduler] Scheduler started
namespacegcworker stdout | 2025-02-14 01:47:22,502 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
namespacegcworker stdout | 2025-02-14 01:47:22,502 [73] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
namespacegcworker stdout | 2025-02-14 01:47:22,502 [73] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:45.503718
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [INFO] [apscheduler.scheduler] Added job "QueueWorker.poll_queue" to job store "default"
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:48:58.505410
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [INFO] [apscheduler.scheduler] Added job "QueueWorker.update_queue_metrics" to job store "default"
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:48:12.505687
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [INFO] [apscheduler.scheduler] Added job "QueueWorker.run_watchdog" to job store "default"
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
namespacegcworker stdout | 2025-02-14 01:47:22,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:45.503718+00:00 (in 22.998733 seconds)
buildlogsarchiver stdout | 2025-02-14 01:47:23,905 [59] [DEBUG] [workers.worker] Scheduling worker.
buildlogsarchiver stdout | 2025-02-14 01:47:23,906 [59] [INFO] [apscheduler.scheduler] Scheduler started
buildlogsarchiver stdout | 2025-02-14 01:47:23,999 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
buildlogsarchiver stdout | 2025-02-14 01:47:23,999 [59] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:34.000511
buildlogsarchiver stdout | 2025-02-14 01:47:24,000 [59] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
buildlogsarchiver stdout | 2025-02-14 01:47:24,002 [59] [INFO] [apscheduler.scheduler] Added job "ArchiveBuildLogsWorker._archive_redis_buildlogs" to job store "default"
buildlogsarchiver stdout | 2025-02-14 01:47:24,013 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
buildlogsarchiver stdout | 2025-02-14 01:47:24,013 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:34.000511+00:00 (in 9.987223 seconds)
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,108 [62] [DEBUG] [__main__] Starting expired app specific token GC worker
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,109 [62] [DEBUG] [__main__] Found expiration window: 1d
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,112 [62] [DEBUG] [workers.worker] Scheduling worker.
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,192 [62] [INFO] [apscheduler.scheduler] Scheduler started
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,212 [62] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,213 [62] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,212 [62] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 02:29:05.213738
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,291 [62] [INFO] [apscheduler.scheduler] Added job "ExpiredAppSpecificTokenWorker._gc_expired_tokens" to job store "default"
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,291 [62] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
expiredappspecifictokenworker stdout | 2025-02-14 01:47:25,291 [62] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 02:29:05.213738+00:00 (in 2499.922036 seconds)
exportactionlogsworker stdout | 2025-02-14 01:47:26,102 [63] [DEBUG] [__main__] Starting export action logs worker
exportactionlogsworker stdout | 2025-02-14 01:47:26,198 [63] [DEBUG] [workers.worker] Scheduling worker.
exportactionlogsworker stdout | 2025-02-14 01:47:26,200 [63] [INFO] [apscheduler.scheduler] Scheduler started
exportactionlogsworker stdout | 2025-02-14 01:47:26,211 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
exportactionlogsworker stdout | 2025-02-14 01:47:26,211 [63] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:30.212654
exportactionlogsworker stdout | 2025-02-14 01:47:26,213 [63] [INFO] [apscheduler.scheduler] Added job "QueueWorker.poll_queue" to job store "default"
exportactionlogsworker stdout | 2025-02-14 01:47:26,214 [63] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:49:39.215004
exportactionlogsworker stdout | 2025-02-14 01:47:26,214 [63] [INFO] [apscheduler.scheduler] Added job "QueueWorker.update_queue_metrics" to job store "default"
exportactionlogsworker stdout | 2025-02-14 01:47:26,214 [63] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:48:25.215238
exportactionlogsworker stdout | 2025-02-14 01:47:26,214 [63] [INFO] [apscheduler.scheduler] Added job "QueueWorker.run_watchdog" to job store "default"
exportactionlogsworker stdout | 2025-02-14 01:47:26,212 [63] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
exportactionlogsworker stdout | 2025-02-14 01:47:26,290 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
exportactionlogsworker stdout | 2025-02-14 01:47:26,291 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:30.212654+00:00 (in 3.921464 seconds)
notificationworker stdout | 2025-02-14 01:47:26,708 [75] [DEBUG] [workers.worker] Scheduling worker.
notificationworker stdout | 2025-02-14 01:47:26,710 [75] [INFO] [apscheduler.scheduler] Scheduler started
notificationworker stdout | 2025-02-14 01:47:26,802 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:47:26,802 [75] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:35.803718
notificationworker stdout | 2025-02-14 01:47:26,803 [75] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
notificationworker stdout | 2025-02-14 01:47:26,805 [75] [INFO] [apscheduler.scheduler] Added job "QueueWorker.poll_queue" to job store "default"
notificationworker stdout | 2025-02-14 01:47:26,805 [75] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:49:41.806837
notificationworker stdout | 2025-02-14 01:47:26,806 [75] [INFO] [apscheduler.scheduler] Added job "QueueWorker.update_queue_metrics" to job store "default"
notificationworker stdout | 2025-02-14 01:47:26,806 [75] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:48:07.807092
notificationworker stdout | 2025-02-14 01:47:26,806 [75] [INFO] [apscheduler.scheduler] Added job "QueueWorker.run_watchdog" to job store "default"
notificationworker stdout | 2025-02-14 01:47:26,806 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:47:26,806 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:35.803718+00:00 (in 8.997239 seconds)
storagereplication stdout | 2025-02-14 01:47:26,813 [90] [DEBUG] [__main__] Full storage replication disabled; skipping
globalpromstats stdout | 2025-02-14 01:47:27,409 [65] [DEBUG] [workers.worker] Scheduling worker.
globalpromstats stdout | 2025-02-14 01:47:27,410 [65] [INFO] [apscheduler.scheduler] Scheduler started
globalpromstats stdout | 2025-02-14 01:47:27,503 [65] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
globalpromstats stdout | 2025-02-14 01:47:27,503 [65] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 02:27:49.504728
globalpromstats stdout | 2025-02-14 01:47:27,504 [65] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
globalpromstats stdout | 2025-02-14 01:47:27,506 [65] [INFO] [apscheduler.scheduler] Added job "GlobalPrometheusStatsWorker._try_report_stats" to job store "default"
globalpromstats stdout | 2025-02-14 01:47:27,506 [65] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
globalpromstats stdout | 2025-02-14 01:47:27,506 [65] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 02:27:49.504728+00:00 (in 2421.998126 seconds)
manifestbackfillworker stdout | 2025-02-14 01:47:28,905 [70] [DEBUG] [workers.worker] Scheduling worker.
manifestbackfillworker stdout | 2025-02-14 01:47:28,906 [70] [INFO] [apscheduler.scheduler] Scheduler started
manifestbackfillworker stdout | 2025-02-14 01:47:29,002 [70] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestbackfillworker stdout | 2025-02-14 01:47:29,002 [70] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 02:18:31.003707
manifestbackfillworker stdout | 2025-02-14 01:47:29,003 [70] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
manifestbackfillworker stdout | 2025-02-14 01:47:29,005 [70] [INFO] [apscheduler.scheduler] Added job "ManifestBackfillWorker._backfill_manifests" to job store "default"
manifestbackfillworker stdout | 2025-02-14 01:47:29,006 [70] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestbackfillworker stdout | 2025-02-14 01:47:29,006 [70] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 02:18:31.003707+00:00 (in 1861.997554 seconds)
nginx stdout | 2025/02/14 01:47:29 [crit] 103#0: *7 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/health/instance", host: "10.129.2.28:8443"
nginx stdout | 2025/02/14 01:47:29 [crit] 103#0: *7 connect() to unix:/tmp/gunicorn_web.sock failed (2: No such file or directory) while connecting to upstream, client: 10.129.2.2, server: _, request: "GET /health/instance HTTP/2.0", upstream: "http://unix:/tmp/gunicorn_web.sock:/quay-registry/static/502.html", host: "10.129.2.28:8443"
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:29 +0000] "GET /health/instance HTTP/2.0" 502 157 "-" "kube-probe/1.30" (0.000 47 0.000 : 0.000)
servicekey stdout | 2025-02-14 01:47:29,013 [89] [DEBUG] [workers.worker] Scheduling worker.
servicekey stdout | 2025-02-14 01:47:29,101 [89] [INFO] [apscheduler.scheduler] Scheduler started
servicekey stdout | 2025-02-14 01:47:29,111 [89] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
servicekey stdout | 2025-02-14 01:47:29,111 [89] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 02:34:39.112675
servicekey stdout | 2025-02-14 01:47:29,113 [89] [INFO] [apscheduler.scheduler] Added job "ServiceKeyWorker._refresh_service_key" to job store "default"
servicekey stdout | 2025-02-14 01:47:29,112 [89] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
servicekey stdout | 2025-02-14 01:47:29,190 [89] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
servicekey stdout | 2025-02-14 01:47:29,190 [89] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 02:34:39.112675+00:00 (in 2829.921862 seconds)
builder stderr | /app/lib/python3.9/site-packages/google/protobuf/runtime_version.py:112: UserWarning: Protobuf gencode version 5.27.2 is older than the runtime version 5.28.2 at buildman.proto. Please avoid checked-in Protobuf gencode that can be obsolete.
builder stderr | warnings.warn(
logrotateworker stdout | 2025-02-14 01:47:29,212 [69] [DEBUG] [__main__] Action log rotation worker not enabled; skipping
builder stdout | 2025-02-14 01:47:29,311 [58] [DEBUG] [__main__] Building is disabled. Please enable the feature flag
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,903 [57] [DEBUG] [workers.worker] Scheduling worker.
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,905 [57] [INFO] [apscheduler.scheduler] Scheduler started
repositorygcworker stdout | 2025-02-14 01:47:29,990 [85] [DEBUG] [__main__] Starting repository GC worker
repositorygcworker stdout | 2025-02-14 01:47:29,993 [85] [DEBUG] [workers.worker] Scheduling worker.
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,993 [57] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,993 [57] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 02:38:52.994545
repositorygcworker stdout | 2025-02-14 01:47:29,995 [85] [INFO] [apscheduler.scheduler] Scheduler started
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,995 [57] [INFO] [apscheduler.scheduler] Added job "BlobUploadCleanupWorker._try_cleanup_uploads" to job store "default"
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,994 [57] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,995 [57] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
blobuploadcleanupworker stdout | 2025-02-14 01:47:29,995 [57] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 02:38:52.994545+00:00 (in 3082.998724 seconds)
repositorygcworker stdout | 2025-02-14 01:47:30,010 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositorygcworker stdout | 2025-02-14 01:47:30,010 [85] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:33.011632
repositorygcworker stdout | 2025-02-14 01:47:30,011 [85] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
repositorygcworker stdout | 2025-02-14 01:47:30,013 [85] [INFO] [apscheduler.scheduler] Added job "QueueWorker.poll_queue" to job store "default"
repositorygcworker stdout | 2025-02-14 01:47:30,013 [85] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:49:43.014615
repositorygcworker stdout | 2025-02-14 01:47:30,013 [85] [INFO] [apscheduler.scheduler] Added job "QueueWorker.update_queue_metrics" to job store "default"
repositorygcworker stdout | 2025-02-14 01:47:30,013 [85] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:36.014770
repositorygcworker stdout | 2025-02-14 01:47:30,013 [85] [INFO] [apscheduler.scheduler] Added job "QueueWorker.run_watchdog" to job store "default"
repositorygcworker stdout | 2025-02-14 01:47:30,090 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositorygcworker stdout | 2025-02-14 01:47:30,090 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:33.011632+00:00 (in 2.921290 seconds)
repositoryactioncounter stdout | 2025-02-14 01:47:30,097 [81] [DEBUG] [workers.worker] Scheduling worker.
repositoryactioncounter stdout | 2025-02-14 01:47:30,099 [81] [INFO] [apscheduler.scheduler] Scheduler started
repositoryactioncounter stdout | 2025-02-14 01:47:30,106 [81] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositoryactioncounter stdout | 2025-02-14 01:47:30,106 [81] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 05:10:22.107650
repositoryactioncounter stdout | 2025-02-14 01:47:30,107 [81] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
repositoryactioncounter stdout | 2025-02-14 01:47:30,108 [81] [INFO] [apscheduler.scheduler] Added job "RepositoryActionCountWorker._run_counting" to job store "default"
repositoryactioncounter stdout | 2025-02-14 01:47:30,192 [81] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositoryactioncounter stdout | 2025-02-14 01:47:30,193 [81] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 05:10:22.107650+00:00 (in 12171.914427 seconds)
exportactionlogsworker stdout | 2025-02-14 01:47:30,213 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
exportactionlogsworker stdout | 2025-02-14 01:47:30,298 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:30 UTC)" (scheduled at 2025-02-14 01:47:30.212654+00:00)
exportactionlogsworker stdout | 2025-02-14 01:47:30,298 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:25.215238+00:00 (in 54.916727 seconds)
autopruneworker stdout | 2025-02-14 01:47:30,306 [56] [DEBUG] [workers.worker] Scheduling worker.
autopruneworker stdout | 2025-02-14 01:47:30,307 [56] [INFO] [apscheduler.scheduler] Scheduler started
gcworker stdout | 2025-02-14 01:47:30,307 [64] [DEBUG] [workers.worker] Scheduling worker.
gcworker stdout | 2025-02-14 01:47:30,307 [64] [INFO] [apscheduler.scheduler] Scheduler started
exportactionlogsworker stdout | 2025-02-14 01:47:30,299 [63] [DEBUG] [workers.queueworker] Getting work item from queue.
autopruneworker stdout | 2025-02-14 01:47:30,309 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
exportactionlogsworker stdout | 2025-02-14 01:47:30,309 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 30, 308180), True, datetime.datetime(2025, 2, 14, 1, 47, 30, 308180), 0, 'exportactionlogs/%', 50, 1, 0])
autopruneworker stdout | 2025-02-14 01:47:30,309 [56] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:52.310342
autopruneworker stdout | 2025-02-14 01:47:30,312 [56] [INFO] [apscheduler.scheduler] Added job "AutoPruneWorker.prune" to job store "default"
autopruneworker stdout | 2025-02-14 01:47:30,310 [56] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
autopruneworker stdout | 2025-02-14 01:47:30,313 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
autopruneworker stdout | 2025-02-14 01:47:30,313 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:52.310342+00:00 (in 21.996790 seconds)
gcworker stdout | 2025-02-14 01:47:30,313 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:47:30,313 [64] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
gcworker stdout | 2025-02-14 01:47:30,389 [64] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:54.390410
gcworker stdout | 2025-02-14 01:47:30,391 [64] [INFO] [apscheduler.scheduler] Added job "GarbageCollectionWorker._garbage_collection_repos" to job store "default"
gcworker stdout | 2025-02-14 01:47:30,391 [64] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:55.392556
gcworker stdout | 2025-02-14 01:47:30,391 [64] [INFO] [apscheduler.scheduler] Added job "GarbageCollectionWorker._scan_notifications" to job store "default"
gcworker stdout | 2025-02-14 01:47:30,396 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:47:30,396 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:54.390410+00:00 (in 23.993934 seconds)
exportactionlogsworker stdout | 2025-02-14 01:47:30,398 [63] [DEBUG] [workers.queueworker] No more work.
exportactionlogsworker stdout | 2025-02-14 01:47:30,398 [63] [DEBUG] [data.database] Disconnecting from database.
exportactionlogsworker stdout | 2025-02-14 01:47:30,398 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:30 UTC)" executed successfully
gunicorn-web stdout | 2025-02-14 01:47:31,197 [68] [DEBUG] [app] Loading default config.
gunicorn-web stdout | 2025-02-14 01:47:31,198 [68] [DEBUG] [util.config.provider.basefileprovider] Applying config file: /quay-registry/conf/stack/config.yaml
gunicorn-web stdout | 2025-02-14 01:47:31,212 [68] [DEBUG] [app] Loaded config
gunicorn-web stdout | 2025-02-14 01:47:31,213 [68] [INFO] [util.ipresolver] Loading AWS IP ranges from disk
gunicorn-web stdout | 2025-02-14 01:47:31,293 [68] [DEBUG] [util.ipresolver] Building AWS IP ranges
gunicorn-web stdout | 2025-02-14 01:47:31,795 [68] [DEBUG] [util.ipresolver] Finished building AWS IP ranges
gunicorn-web stdout | 2025-02-14 01:47:31,798 [68] [DEBUG] [botocore.hooks] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
gunicorn-web stdout | 2025-02-14 01:47:31,801 [68] [DEBUG] [botocore.hooks] Changing event name from before-call.apigateway to before-call.api-gateway
gunicorn-web stdout | 2025-02-14 01:47:31,803 [68] [DEBUG] [botocore.hooks] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
gunicorn-web stdout | 2025-02-14 01:47:31,805 [68] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
gunicorn-web stdout | 2025-02-14 01:47:31,806 [68] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
gunicorn-web stdout | 2025-02-14 01:47:31,807 [68] [DEBUG] [botocore.hooks] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
gunicorn-web stdout | 2025-02-14 01:47:31,809 [68] [DEBUG] [botocore.hooks] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
gunicorn-web stdout | 2025-02-14 01:47:31,812 [68] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
gunicorn-web stdout | 2025-02-14 01:47:31,813 [68] [DEBUG] [botocore.hooks] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
gunicorn-web stdout | 2025-02-14 01:47:31,813 [68] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
gunicorn-web stdout | 2025-02-14 01:47:31,813 [68] [DEBUG] [botocore.hooks] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
chunkcleanupworker stdout | 2025-02-14 01:47:32,206 [60] [DEBUG] [__main__] Swift storage not detected; sleeping
gunicorn-web stdout | 2025-02-14 01:47:32,305 [68] [DEBUG] [data.database] Configuring database
gunicorn-web stdout | 2025-02-14 01:47:32,307 [68] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:47:32,307 [68] [INFO] [data.secscan_model] ===============================
gunicorn-web stdout | 2025-02-14 01:47:32,307 [68] [INFO] [data.secscan_model] Using split secscan model: `[]`
gunicorn-web stdout | 2025-02-14 01:47:32,307 [68] [INFO] [data.secscan_model] ===============================
gunicorn-web stdout | 2025-02-14 01:47:32,308 [68] [DEBUG] [data.logs_model] Configuring log model
gunicorn-web stdout | 2025-02-14 01:47:32,308 [68] [INFO] [data.logs_model] ===============================
gunicorn-web stdout | 2025-02-14 01:47:32,308 [68] [INFO] [data.logs_model] Using logs model ``
gunicorn-web stdout | 2025-02-14 01:47:32,308 [68] [INFO] [data.logs_model] ===============================
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,896 [71] [DEBUG] [workers.worker] Scheduling worker.
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,897 [71] [INFO] [apscheduler.scheduler] Scheduler started
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,897 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,897 [71] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,897 [71] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:48:05.898886
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,899 [71] [INFO] [apscheduler.scheduler] Added job "ManifestSubjectBackfillWorker._backfill_manifest_subject" to job store "default"
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,899 [71] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:52.900596
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,899 [71] [INFO] [apscheduler.scheduler] Added job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type" to job store "default"
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestsubjectbackfillworker stdout | 2025-02-14 01:47:32,900 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:52.900596+00:00 (in 20.000596 seconds)
repositorygcworker stdout | 2025-02-14 01:47:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositorygcworker stdout | 2025-02-14 01:47:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:33 UTC)" (scheduled at 2025-02-14 01:47:33.011632+00:00)
repositorygcworker stdout | 2025-02-14 01:47:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:36.014770+00:00 (in 3.002340 seconds)
repositorygcworker stdout | 2025-02-14 01:47:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue.
repositorygcworker stdout | 2025-02-14 01:47:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 33, 12610), True, datetime.datetime(2025, 2, 14, 1, 47, 33, 12610), 0, 'repositorygc/%', 50, 1, 0])
repositorygcworker stdout | 2025-02-14 01:47:33,023 [85] [DEBUG] [workers.queueworker] No more work.
repositorygcworker stdout | 2025-02-14 01:47:33,023 [85] [DEBUG] [data.database] Disconnecting from database.
repositorygcworker stdout | 2025-02-14 01:47:33,023 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:33 UTC)" executed successfully
gunicorn-secscan stdout | 2025-02-14 01:47:33,112 [67] [DEBUG] [app] Loading default config.
gunicorn-secscan stdout | 2025-02-14 01:47:33,112 [67] [DEBUG] [util.config.provider.basefileprovider] Applying config file: /quay-registry/conf/stack/config.yaml
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,119 [87] [DEBUG] [__main__] Starting security scanning notification worker
gunicorn-secscan stdout | 2025-02-14 01:47:33,119 [67] [DEBUG] [app] Loaded config
gunicorn-secscan stdout | 2025-02-14 01:47:33,120 [67] [INFO] [util.ipresolver] Loading AWS IP ranges from disk
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,121 [87] [DEBUG] [workers.worker] Scheduling worker.
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,121 [87] [INFO] [apscheduler.scheduler] Scheduler started
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,122 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,122 [87] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,122 [87] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:59.123196
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,123 [87] [INFO] [apscheduler.scheduler] Added job "QueueWorker.poll_queue" to job store "default"
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,123 [87] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:50:20.124914
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,124 [87] [INFO] [apscheduler.scheduler] Added job "QueueWorker.update_queue_metrics" to job store "default"
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,124 [87] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:48.125163
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,124 [87] [INFO] [apscheduler.scheduler] Added job "QueueWorker.run_watchdog" to job store "default"
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,124 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:47:33,124 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:48.125163+00:00 (in 15.000586 seconds)
gunicorn-secscan stdout | 2025-02-14 01:47:33,128 [67] [DEBUG] [util.ipresolver] Building AWS IP ranges
securityworker stdout | 2025-02-14 01:47:33,229 [88] [DEBUG] [workers.worker] Scheduling worker.
securityworker stdout | 2025-02-14 01:47:33,229 [88] [INFO] [apscheduler.scheduler] Scheduler started
securityworker stdout | 2025-02-14 01:47:33,230 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityworker stdout | 2025-02-14 01:47:33,230 [88] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
securityworker stdout | 2025-02-14 01:47:33,230 [88] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:54.231161
securityworker stdout | 2025-02-14 01:47:33,231 [88] [INFO] [apscheduler.scheduler] Added job "SecurityWorker._index_in_scanner" to job store "default"
securityworker stdout | 2025-02-14 01:47:33,231 [88] [DEBUG] [workers.worker] First run scheduled for 2025-02-14 01:47:59.232325
securityworker stdout | 2025-02-14 01:47:33,231 [88] [INFO] [apscheduler.scheduler] Added job "SecurityWorker._index_recent_manifests_in_scanner" to job store "default"
securityworker stdout | 2025-02-14 01:47:33,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityworker stdout | 2025-02-14 01:47:33,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:54.231161+00:00 (in 20.999547 seconds)
gunicorn-secscan stdout | 2025-02-14 01:47:33,304 [67] [DEBUG] [util.ipresolver] Finished building AWS IP ranges
gunicorn-secscan stdout | 2025-02-14 01:47:33,305 [67] [DEBUG] [botocore.hooks] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
gunicorn-secscan stdout | 2025-02-14 01:47:33,307 [67] [DEBUG] [botocore.hooks] Changing event name from before-call.apigateway to before-call.api-gateway
gunicorn-secscan stdout | 2025-02-14 01:47:33,308 [67] [DEBUG] [botocore.hooks] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
gunicorn-secscan stdout | 2025-02-14 01:47:33,309 [67] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
gunicorn-secscan stdout | 2025-02-14 01:47:33,309 [67] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
gunicorn-secscan stdout | 2025-02-14 01:47:33,310 [67] [DEBUG] [botocore.hooks] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
gunicorn-secscan stdout | 2025-02-14 01:47:33,310 [67] [DEBUG] [botocore.hooks] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
gunicorn-secscan stdout | 2025-02-14 01:47:33,312 [67] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
gunicorn-secscan stdout | 2025-02-14 01:47:33,312 [67] [DEBUG] [botocore.hooks] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
gunicorn-secscan stdout | 2025-02-14 01:47:33,312 [67] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
gunicorn-secscan stdout | 2025-02-14 01:47:33,312 [67] [DEBUG] [botocore.hooks] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
gunicorn-secscan stdout | 2025-02-14 01:47:33,343 [67] [DEBUG] [data.database] Configuring database
gunicorn-secscan stdout | 2025-02-14 01:47:33,344 [67] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-secscan stdout | 2025-02-14 01:47:33,344 [67] [INFO] [data.secscan_model] ===============================
gunicorn-secscan stdout | 2025-02-14 01:47:33,344 [67] [INFO] [data.secscan_model] Using split secscan model: `[]`
gunicorn-secscan stdout | 2025-02-14 01:47:33,344 [67] [INFO] [data.secscan_model] ===============================
gunicorn-secscan stdout | 2025-02-14 01:47:33,345 [67] [DEBUG] [data.logs_model] Configuring log model
gunicorn-secscan stdout | 2025-02-14 01:47:33,345 [67] [INFO] [data.logs_model] ===============================
gunicorn-secscan stdout | 2025-02-14 01:47:33,345 [67] [INFO] [data.logs_model] Using logs model ``
gunicorn-secscan stdout | 2025-02-14 01:47:33,345 [67] [INFO] [data.logs_model] ===============================
gunicorn-secscan stdout | 2025-02-14 01:47:33,591 [67] [DEBUG] [__config__] Starting secscan gunicorn with 2 workers and gevent worker class
gunicorn-secscan stderr | Traceback (most recent call last):
gunicorn-secscan stderr | File "src/gevent/_abstract_linkable.py", line 287, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-secscan stderr | File "src/gevent/_abstract_linkable.py", line 333, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-secscan stderr | AssertionError: (None, )
gunicorn-secscan stderr | 2025-02-14T01:47:33Z failed with AssertionError
buildlogsarchiver stdout | 2025-02-14 01:47:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
buildlogsarchiver stdout | 2025-02-14 01:47:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:47:34 UTC)" (scheduled at 2025-02-14 01:47:34.000511+00:00)
buildlogsarchiver stdout | 2025-02-14 01:47:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 47, 34, 1293), False, 50, 1, 0])
buildlogsarchiver stdout | 2025-02-14 01:47:34,002 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:04.000511+00:00 (in 29.998441 seconds)
buildlogsarchiver stdout | 2025-02-14 01:47:34,011 [59] [DEBUG] [__main__] No more builds to archive
buildlogsarchiver stdout | 2025-02-14 01:47:34,012 [59] [DEBUG] [data.database] Disconnecting from database.
buildlogsarchiver stdout | 2025-02-14 01:47:34,012 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:04 UTC)" executed successfully
gunicorn-web stdout | 2025-02-14 01:47:34,175 [68] [DEBUG] [__config__] Starting web gunicorn with 4 workers and gevent worker class
gunicorn-web stderr | Traceback (most recent call last):
gunicorn-web stderr | File "src/gevent/_abstract_linkable.py", line 287, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-web stderr | File "src/gevent/_abstract_linkable.py", line 333, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-web stderr | AssertionError: (None, )
gunicorn-web stderr | 2025-02-14T01:47:34Z failed with AssertionError
gunicorn-registry stdout | 2025-02-14 01:47:34,376 [66] [DEBUG] [app] Loading default config.
gunicorn-registry stdout | 2025-02-14 01:47:34,377 [66] [DEBUG] [util.config.provider.basefileprovider] Applying config file: /quay-registry/conf/stack/config.yaml
gunicorn-registry stdout | 2025-02-14 01:47:34,383 [66] [DEBUG] [app] Loaded config
gunicorn-registry stdout | 2025-02-14 01:47:34,383 [66] [INFO] [util.ipresolver] Loading AWS IP ranges from disk
gunicorn-registry stdout | 2025-02-14 01:47:34,390 [66] [DEBUG] [util.ipresolver] Building AWS IP ranges
gunicorn-registry stdout | 2025-02-14 01:47:34,461 [66] [DEBUG] [util.ipresolver] Finished building AWS IP ranges
gunicorn-registry stdout | 2025-02-14 01:47:34,462 [66] [DEBUG] [botocore.hooks] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
gunicorn-registry stdout | 2025-02-14 01:47:34,463 [66] [DEBUG] [botocore.hooks] Changing event name from before-call.apigateway to before-call.api-gateway
gunicorn-registry stdout | 2025-02-14 01:47:34,464 [66] [DEBUG] [botocore.hooks] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
gunicorn-registry stdout | 2025-02-14 01:47:34,465 [66] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
gunicorn-registry stdout | 2025-02-14 01:47:34,465 [66] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
gunicorn-registry stdout | 2025-02-14 01:47:34,465 [66] [DEBUG] [botocore.hooks] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
gunicorn-registry stdout | 2025-02-14 01:47:34,466 [66] [DEBUG] [botocore.hooks] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
gunicorn-registry stdout | 2025-02-14 01:47:34,467 [66] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
gunicorn-registry stdout | 2025-02-14 01:47:34,467 [66] [DEBUG] [botocore.hooks] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
gunicorn-registry stdout | 2025-02-14 01:47:34,467 [66] [DEBUG] [botocore.hooks] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
gunicorn-registry stdout | 2025-02-14 01:47:34,467 [66] [DEBUG] [botocore.hooks] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
gunicorn-registry stdout | 2025-02-14 01:47:34,484 [66] [DEBUG] [data.database] Configuring database
gunicorn-registry stdout | 2025-02-14 01:47:34,484 [66] [INFO] [data.database] Connection pooling enabled for postgresql; stale timeout: None; max connection count: None
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.secscan_model] ===============================
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.secscan_model] Using split secscan model: `[]`
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.secscan_model] ===============================
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [DEBUG] [data.logs_model] Configuring log model
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.logs_model] ===============================
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.logs_model] Using logs model ``
gunicorn-registry stdout | 2025-02-14 01:47:34,485 [66] [INFO] [data.logs_model] ===============================
gunicorn-registry stdout | 2025-02-14 01:47:35,055 [66] [DEBUG] [__config__] Starting registry gunicorn with 8 workers and gevent worker class
gunicorn-registry stderr | Traceback (most recent call last):
gunicorn-registry stderr | File "src/gevent/_abstract_linkable.py", line 287, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-registry stderr | File "src/gevent/_abstract_linkable.py", line 333, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
gunicorn-registry stderr | AssertionError: (None, )
gunicorn-registry stderr | 2025-02-14T01:47:35Z failed with AssertionError
notificationworker stdout | 2025-02-14 01:47:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:47:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:47:35 UTC)" (scheduled at 2025-02-14 01:47:35.803718+00:00)
notificationworker stdout | 2025-02-14 01:47:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:47:35,806 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 35, 804581), True, datetime.datetime(2025, 2, 14, 1, 47, 35, 804581), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:47:35,806 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:45.803718+00:00 (in 9.997183 seconds)
notificationworker stdout | 2025-02-14 01:47:35,815 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:47:35,816 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:47:35,816 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:47:45 UTC)" executed successfully
repositorygcworker stdout | 2025-02-14 01:47:36,015 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositorygcworker stdout | 2025-02-14 01:47:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:33.011632+00:00 (in 56.996345 seconds)
repositorygcworker stdout | 2025-02-14 01:47:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:36 UTC)" (scheduled at 2025-02-14 01:47:36.014770+00:00)
repositorygcworker stdout | 2025-02-14 01:47:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog.
repositorygcworker stdout | 2025-02-14 01:47:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:36 UTC)" executed successfully
gunicorn-web stdout | 2025-02-14 01:47:44,010 [242] [DEBUG] [app] Starting request: urn:request:feadde90-c6b5-45ed-a829-cd88c84060b9 (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:47:44,015 [242] [DEBUG] [urllib3.connectionpool] Starting new HTTPS connection (1): localhost:8443
gunicorn-web stdout | 2025-02-14 01:47:44,040 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-registry stdout | 2025-02-14 01:47:44,043 [246] [DEBUG] [app] Starting request: urn:request:6473bca5-dc0e-431f-81d9-0c298f002ed1 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-registry stdout | 2025-02-14 01:47:44,044 [246] [DEBUG] [app] Ending request: urn:request:6473bca5-dc0e-431f-81d9-0c298f002ed1 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:6473bca5-dc0e-431f-81d9-0c298f002ed1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.004 162 0.004)
gunicorn-registry stdout | 2025-02-14 01:47:44,045 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:47:44,045 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:47:44,046 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:47:44,049 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised.
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:47:44,049 [242] [DEBUG] [app] Starting request: urn:request:589210ad-ed3a-4993-b80e-40068f628b9b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:47:44,050 [242] [DEBUG] [app] Ending request: urn:request:589210ad-ed3a-4993-b80e-40068f628b9b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:589210ad-ed3a-4993-b80e-40068f628b9b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:47:44,051 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:44,051 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:44,053 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name", "t1"."kid", "t1"."service", "t1"."jwk", "t1"."metadata", "t1"."created_date", "t1"."expiration_date", "t1"."rotation_duration", "t1"."approval_id" FROM "servicekey" AS "t1" LEFT OUTER JOIN "servicekeyapproval" AS "t2" ON ("t1"."approval_id" = "t2"."id") WHERE ((((NOT ("t1"."approval_id" IS %s) AND (("t1"."expiration_date" > %s) OR ("t1"."expiration_date" IS %s))) AND ("t1"."service" = %s)) AND (NOT (("t1"."service" = %s) AND ("t1"."expiration_date" <= %s)) OR NOT ((("t1"."service" = %s) AND ("t1"."approval_id" IS %s)) AND ("t1"."created_date" <= %s)))) AND (NOT ("t1"."expiration_date" <= %s) OR ("t1"."expiration_date" IS %s)))', [None, datetime.datetime(2025, 2, 14, 1, 47, 44, 52186), None, 'quay', 'quay', datetime.datetime(2025, 2, 14, 1, 47, 44, 52206), 'quay', None, datetime.datetime(2025, 2, 13, 1, 47, 44, 52220), datetime.datetime(2025, 2, 7, 1, 47, 44, 52227), None]) gunicorn-web stdout | 2025-02-14 01:47:44,062 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:47:44,063 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:47:44,069 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:47:44,069 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:47:44,071 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:47:44,073 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:47:44,076 [242] [DEBUG] [app] Ending request: urn:request:feadde90-c6b5-45ed-a829-cd88c84060b9 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:feadde90-c6b5-45ed-a829-cd88c84060b9', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:47:44,076 [242] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:47:44,076 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:47:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.070 47 0.070) gunicorn-web stdout | 2025-02-14 01:47:44,080 [242] [DEBUG] [app] Starting request: urn:request:1f63542c-4d16-4fdf-bc8e-febfc772e74f (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:47:44,081 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:47:44,082 [246] [DEBUG] [app] Starting request: urn:request:cf0b21bb-7d89-4f80-b10e-841fbc22bd37 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:47:44,083 [246] [DEBUG] [app] Ending request: urn:request:cf0b21bb-7d89-4f80-b10e-841fbc22bd37 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:cf0b21bb-7d89-4f80-b10e-841fbc22bd37', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:47:44,083 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:44,083 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:44,085 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:47:44,087 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:47:44,091 [244] [DEBUG] [app] Starting request: urn:request:ac74a182-313a-4f2a-9459-a7beb4e0ad7a (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:47:44,092 [244] [DEBUG] [app] Ending request: urn:request:ac74a182-313a-4f2a-9459-a7beb4e0ad7a (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:ac74a182-313a-4f2a-9459-a7beb4e0ad7a', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.005 159 0.005) gunicorn-web stdout | 2025-02-14 01:47:44,092 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:44,092 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:44,093 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:47:44,093 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:47:44,098 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:47:44,098 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:47:44,104 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:47:44,107 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:47:44,109 [242] [DEBUG] [app] Ending request: urn:request:1f63542c-4d16-4fdf-bc8e-febfc772e74f (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:1f63542c-4d16-4fdf-bc8e-febfc772e74f', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:47:44,109 [242] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.030) gunicorn-web stdout | 2025-02-14 01:47:44,110 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:47:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:47:44,445 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:47:44,541 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:47:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:47:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:45 UTC)" (scheduled at 2025-02-14 01:47:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:47:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:47:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 45, 504490), True, datetime.datetime(2025, 2, 14, 1, 47, 45, 504490), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:47:45,505 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:12.505687+00:00 (in 27.000121 seconds) namespacegcworker stdout | 2025-02-14 01:47:45,515 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:47:45,515 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:47:45,515 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:47:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:47:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:55.803718+00:00 (in 9.999462 seconds) notificationworker stdout | 2025-02-14 01:47:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:47:55 UTC)" (scheduled at 2025-02-14 01:47:45.803718+00:00) notificationworker stdout | 2025-02-14 01:47:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:47:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 45, 804573), True, datetime.datetime(2025, 2, 14, 1, 47, 45, 804573), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:47:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:47:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:47:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:47:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:47:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:47:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:46 UTC)" (scheduled at 2025-02-14 01:47:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:47:46,011 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:47:46,011 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:46.009738+00:00 (in 59.998318 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:47:46,019 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:47:46,019 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:47:46,628 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:47:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:47:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:48 UTC)" (scheduled at 2025-02-14 01:47:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:47:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
securityscanningnotificationworker stdout | 2025-02-14 01:47:48,126 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:48 UTC)" executed successfully securityscanningnotificationworker stdout | 2025-02-14 01:47:48,126 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:59.123196+00:00 (in 10.996932 seconds) namespacegcworker stdout | 2025-02-14 01:47:50,005 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:47:50,301 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:47:52,120 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:47:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:47:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:47:52 UTC)" (scheduled at 2025-02-14 01:47:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:47:52,311 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:22.310342+00:00 (in 29.999018 seconds) autopruneworker stdout | 2025-02-14 01:47:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494072316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:47:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:47:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:47:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:22 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:47:52 UTC)" (scheduled at 2025-02-14 01:47:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,902 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:05.898886+00:00 (in 12.996497 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,911 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:47:52,911 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:47:53,021 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:47:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:47:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:47:54 UTC)" (scheduled at 2025-02-14 01:47:54.231161+00:00) securityworker stdout | 2025-02-14 01:47:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:47:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:47:54,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:59.232325+00:00 (in 4.999890 seconds) securityworker stdout | 2025-02-14 01:47:54,234 [88] [DEBUG] [urllib3.connectionpool] Starting new HTTP connection (1): quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 securityworker stdout | 2025-02-14 01:47:54,244 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:47:54,246 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:47:54,258 [88] [DEBUG] [peewee] 
('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:47:54,261 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stdout | 2025-02-14 01:47:54,261 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:47:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:47:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:47:54 UTC)" (scheduled at 2025-02-14 01:47:54.390410+00:00) gcworker stdout | 2025-02-14 01:47:54,391 [64] [DEBUG] [peewee] ('SELECT DISTINCT "t1"."removed_tag_expiration_s" FROM "user" AS "t1" LIMIT %s', [100]) gcworker stdout | 2025-02-14 01:47:54,391 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:47:55.392556+00:00 (in 1.000813 seconds) gcworker stdout | 2025-02-14 01:47:54,399 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:47:54,399 [64] [DEBUG] [data.database] Disconnecting from database. gcworker stdout | 2025-02-14 01:47:54,399 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:47:54,836 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} notificationworker stdout | 2025-02-14 01:47:55,240 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:47:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:47:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:24.390410+00:00 (in 28.997430 seconds) gcworker stdout | 2025-02-14 01:47:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:25 UTC)" (scheduled at 2025-02-14 01:47:55.392556+00:00) gcworker stdout | 2025-02-14 01:47:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:47:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497375402, None, 1, 0]) gcworker stdout | 2025-02-14 01:47:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:47:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:47:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:47:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:05.803718+00:00 (in 9.999538 seconds) notificationworker stdout | 2025-02-14 01:47:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:05 UTC)" (scheduled at 2025-02-14 01:47:55.803718+00:00) notificationworker stdout | 2025-02-14 01:47:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:47:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 55, 804473), True, datetime.datetime(2025, 2, 14, 1, 47, 55, 804473), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:47:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:47:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:47:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:47:56,014 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:47:56,444 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:47:56,831 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:47:57,229 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:47:57,515 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:47:57,640 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:47:57,898 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:47:58,210 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:47:58,331 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:47:58,735 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:47:59,008 [244] [DEBUG] [app] Starting request: urn:request:93cd90a2-09cd-4636-9b43-db37704b64ae (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:47:59,009 [243] [DEBUG] [app] Starting request: urn:request:89def492-b35b-49df-94e5-3d8935f1d521 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:47:59,013 [244] [DEBUG] [urllib3.connectionpool] Starting new HTTPS connection (1): localhost:8443 gunicorn-web stdout | 2025-02-14 01:47:59,014 [243] [DEBUG] [urllib3.connectionpool] Starting new HTTPS connection (1): localhost:8443 gunicorn-web stdout | 
2025-02-14 01:47:59,038 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:47:59,038 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:47:59,040 [246] [DEBUG] [app] Starting request: urn:request:ef25bf6b-655c-4962-92f6-9a6d7fa8973d (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:47:59,040 [246] [DEBUG] [app] Ending request: urn:request:ef25bf6b-655c-4962-92f6-9a6d7fa8973d (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:ef25bf6b-655c-4962-92f6-9a6d7fa8973d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:47:59,041 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:59,041 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:47:59,041 [253] [DEBUG] [app] Starting request: urn:request:674bade5-30d0-4eec-97ba-d0aebcc25af6 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:47:59,042 [253] [DEBUG] [app] Ending request: urn:request:674bade5-30d0-4eec-97ba-d0aebcc25af6 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:674bade5-30d0-4eec-97ba-d0aebcc25af6', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:47:59,042 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.004 162 0.004) gunicorn-registry stdout | 2025-02-14 01:47:59,042 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:59,043 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:59,044 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:47:59,045 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:47:59,045 [242] [DEBUG] [app] Starting request: urn:request:c9888105-6e34-4186-be56-b6171ff3505e (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:47:59,046 [242] [DEBUG] [app] Ending request: urn:request:c9888105-6e34-4186-be56-b6171ff3505e (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:c9888105-6e34-4186-be56-b6171ff3505e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:47:59,046 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:47:59,047 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:59,047 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:47:59,048 [242] [DEBUG] [app] Starting request: urn:request:8d20695e-4991-4e98-8d55-b936d421ef34 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:47:59,048 [242] [DEBUG] [app] Ending request: urn:request:8d20695e-4991-4e98-8d55-b936d421ef34 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8d20695e-4991-4e98-8d55-b936d421ef34', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:47:59,048 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:47:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:47:59,048 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:47:59,048 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name", "t1"."kid", "t1"."service", "t1"."jwk", "t1"."metadata", "t1"."created_date", "t1"."expiration_date", "t1"."rotation_duration", "t1"."approval_id" FROM "servicekey" AS "t1" LEFT OUTER JOIN "servicekeyapproval" AS "t2" ON ("t1"."approval_id" = "t2"."id") WHERE ((((NOT ("t1"."approval_id" IS %s) AND (("t1"."expiration_date" > %s) OR ("t1"."expiration_date" IS %s))) AND ("t1"."service" = %s)) AND (NOT (("t1"."service" = %s) AND ("t1"."expiration_date" <= %s)) OR NOT ((("t1"."service" = %s) AND ("t1"."approval_id" IS %s)) AND 
("t1"."created_date" <= %s)))) AND (NOT ("t1"."expiration_date" <= %s) OR ("t1"."expiration_date" IS %s)))', [None, datetime.datetime(2025, 2, 14, 1, 47, 59, 47961), None, 'quay', 'quay', datetime.datetime(2025, 2, 14, 1, 47, 59, 47985), 'quay', None, datetime.datetime(2025, 2, 13, 1, 47, 59, 47999), datetime.datetime(2025, 2, 7, 1, 47, 59, 48006), None]) gunicorn-web stdout | 2025-02-14 01:47:59,050 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name", "t1"."kid", "t1"."service", "t1"."jwk", "t1"."metadata", "t1"."created_date", "t1"."expiration_date", "t1"."rotation_duration", "t1"."approval_id" FROM "servicekey" AS "t1" LEFT OUTER JOIN "servicekeyapproval" AS "t2" ON ("t1"."approval_id" = "t2"."id") WHERE ((((NOT ("t1"."approval_id" IS %s) AND (("t1"."expiration_date" > %s) OR ("t1"."expiration_date" IS %s))) AND ("t1"."service" = %s)) AND (NOT (("t1"."service" = %s) AND ("t1"."expiration_date" <= %s)) OR NOT ((("t1"."service" = %s) AND ("t1"."approval_id" IS %s)) AND ("t1"."created_date" <= %s)))) AND (NOT ("t1"."expiration_date" <= %s) OR ("t1"."expiration_date" IS %s)))', [None, datetime.datetime(2025, 2, 14, 1, 47, 59, 49880), None, 'quay', 'quay', datetime.datetime(2025, 2, 14, 1, 47, 59, 49903), 'quay', None, datetime.datetime(2025, 2, 13, 1, 47, 59, 49917), datetime.datetime(2025, 2, 7, 1, 47, 59, 49925), None]) gunicorn-web stdout | 2025-02-14 01:47:59,058 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:47:59,059 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:47:59,060 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:47:59,060 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:47:59,064 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:47:59,064 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:47:59,065 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:47:59,066 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:47:59,066 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:47:59,068 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:47:59,069 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:47:59,070 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:47:59,071 [243] [DEBUG] [app] Ending request: urn:request:89def492-b35b-49df-94e5-3d8935f1d521 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:89def492-b35b-49df-94e5-3d8935f1d521', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:47:59,072 [243] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.067 47 0.067) gunicorn-web stdout | 2025-02-14 01:47:59,072 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:47:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:47:59,073 [244] [DEBUG] [app] Ending request: urn:request:93cd90a2-09cd-4636-9b43-db37704b64ae (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:93cd90a2-09cd-4636-9b43-db37704b64ae', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:47:59,073 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:47:59,073 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:47:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:47:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.066 47 0.066) securityscanningnotificationworker stdout | 2025-02-14 01:47:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:47:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:48.125163+00:00 (in 49.001551 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:47:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:59 UTC)" (scheduled at 2025-02-14 01:47:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:47:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. securityscanningnotificationworker stdout | 2025-02-14 01:47:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 47, 59, 123865), True, datetime.datetime(2025, 2, 14, 1, 47, 59, 123865), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:47:59,134 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:47:59,134 [87] [DEBUG] [data.database] Disconnecting from database. 
securityscanningnotificationworker stdout | 2025-02-14 01:47:59,134 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:48:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:47:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:47:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:24.231161+00:00 (in 24.998309 seconds) securityworker stdout | 2025-02-14 01:47:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:29 UTC)" (scheduled at 2025-02-14 01:47:59.232325+00:00) securityworker stdout | 2025-02-14 01:47:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:47:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:47:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:47:59,237 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:47:59,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:47:59,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:47:59,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:47:59,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:47:59,246 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Discarding block 
and setting new max to: 1 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:47:59,249 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:47:59,250 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 42, 59, 236841), 1, 2]) securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:47:59,252 [88] 
[DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:47:59,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:47:59,253 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 42, 59, 236841), 1, 2]) securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:47:59,255 [88] 
[DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:47:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:47:59,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stdout | 2025-02-14 01:47:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:47:59,505 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} nginx stdout | 10.128.4.31 - - [14/Feb/2025:01:48:00 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" nginx stdout | 10.129.2.27 - - [14/Feb/2025:01:48:00 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:01,220 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:48:01,222 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:48:01,224 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:48:01,233 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:48:01,235 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:48:01,247 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:48:02,033 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 
2025-02-14 01:48:02,419 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:48:03,139 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:48:03,141 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:48:03,143 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:48:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:48:04,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:34.000511+00:00 (in 29.999362 seconds) buildlogsarchiver stdout | 2025-02-14 01:48:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:34 UTC)" (scheduled at 2025-02-14 01:48:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:48:04,002 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 48, 4, 1441), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:48:04,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:48:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
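The recurring "pushed registry to pushgateway at http://localhost:9091 with grouping key {...}" entries above come from each worker process pushing its own Prometheus metrics to a local Pushgateway, keyed by host, process name, and PID so series from different processes stay separate. A minimal sketch of that pattern with the prometheus_client library follows; the job name and metric are illustrative placeholders, only the gateway address and grouping-key fields mirror the log.

```python
# Minimal sketch of pushing per-worker metrics to a Prometheus Pushgateway with a
# grouping key, mirroring the "pushed registry to pushgateway" log lines above.
# The job name and metric are illustrative; only the pattern matches the logs.
import os
import socket

from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
jobs_run = Counter("worker_jobs_run", "Jobs executed by this worker", registry=registry)
jobs_run.inc()

# The grouping key keeps series from different pods/processes/PIDs distinct,
# as in the {'host': ..., 'process_name': ..., 'pid': ...} keys in the log.
push_to_gateway(
    "localhost:9091",
    job="quay",  # hypothetical job name
    registry=registry,
    grouping_key={
        "host": socket.gethostname(),
        "process_name": "buildlogsarchiver.py",
        "pid": str(os.getpid()),
    },
)
```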
buildlogsarchiver stdout | 2025-02-14 01:48:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:48:04,411 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:48:04,414 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:48:04,416 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:48:04,418 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:48:04,421 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:48:04,424 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:48:04,426 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:48:04,491 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:48:04,494 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:48:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:07.807092+00:00 (in 2.002939 seconds) notificationworker stdout | 2025-02-14 01:48:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:15 UTC)" (scheduled at 2025-02-14 01:48:05.803718+00:00) notificationworker stdout | 2025-02-14 01:48:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
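The worker entries follow a fixed APScheduler rhythm: "Looking for jobs to run", "Running job ... (trigger: interval[...])", then "executed successfully". A rough sketch of that scheduling pattern is below; the poll and watchdog bodies are stand-ins, not Quay's actual QueueWorker code, and only the interval lengths are taken from the log.

```python
# Rough sketch of the APScheduler interval pattern behind the
# "Running job ... (trigger: interval[0:00:10]) ... executed successfully" entries.
# poll_queue()/run_watchdog() are stand-ins; Quay's QueueWorker does real work here.
import logging

from apscheduler.schedulers.blocking import BlockingScheduler

logging.basicConfig(level=logging.DEBUG)  # surfaces the apscheduler.scheduler DEBUG lines
log = logging.getLogger("workers.queueworker")


def poll_queue():
    # Placeholder for "Getting work item from queue." / "No more work."
    log.debug("Getting work item from queue.")
    log.debug("No more work.")


def run_watchdog():
    log.debug("Running watchdog.")


scheduler = BlockingScheduler()
scheduler.add_job(poll_queue, "interval", seconds=10)   # matches interval[0:00:10]
scheduler.add_job(run_watchdog, "interval", minutes=1)  # matches interval[0:01:00]

if __name__ == "__main__":
    scheduler.start()
```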
notificationworker stdout | 2025-02-14 01:48:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 5, 804428), True, datetime.datetime(2025, 2, 14, 1, 48, 5, 804428), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:48:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:48:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:52.900596+00:00 (in 47.001242 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:05 UTC)" (scheduled at 2025-02-14 01:48:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:48:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:48:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:15.803718+00:00 (in 7.996121 seconds) notificationworker stdout | 2025-02-14 01:48:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:07 UTC)" (scheduled at 2025-02-14 01:48:07.807092+00:00) notificationworker stdout | 2025-02-14 01:48:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
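The peewee lines above are the generated SQL for the notification queue poll: pick one available, non-expired queueitem whose queue_name matches the notification/ prefix, in random order. A simplified peewee rendering of that query shape follows; the QueueItem model and connection settings are trimmed stand-ins for Quay's real schema, with column names taken from the logged SQL.

```python
# Simplified peewee rendering of the queue-poll query logged above.
# QueueItem is a trimmed stand-in; column names follow the SQL in the log.
from datetime import datetime

from peewee import (BooleanField, CharField, DateTimeField, IntegerField,
                    Model, PostgresqlDatabase, TextField, fn)

db = PostgresqlDatabase("quay", host="quayregistry-quay-database", port=5432, user="quay")  # placeholder DSN


class QueueItem(Model):
    queue_name = CharField(index=True)
    body = TextField()
    available_after = DateTimeField()
    available = BooleanField(default=True)
    processing_expires = DateTimeField(null=True)
    retries_remaining = IntegerField(default=5)

    class Meta:
        database = db
        table_name = "queueitem"


def next_notification_item(now=None):
    now = now or datetime.utcnow()
    return (
        QueueItem.select()
        .where(
            (QueueItem.available_after <= now)
            & ((QueueItem.available == True) | (QueueItem.processing_expires <= now))
            & (QueueItem.retries_remaining > 0)
            & (QueueItem.queue_name ** "notification/%")  # peewee's ** operator renders as ILIKE
        )
        .order_by(fn.Random())
        .limit(1)
        .first()
    )
```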
notificationworker stdout | 2025-02-14 01:48:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:48:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:48:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:45.503718+00:00 (in 32.997555 seconds) namespacegcworker stdout | 2025-02-14 01:48:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:12 UTC)" (scheduled at 2025-02-14 01:48:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:48:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:48:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:48:14,007 [242] [DEBUG] [app] Starting request: urn:request:228f1d14-08cd-448c-844b-8ba2cc921539 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:14,008 [244] [DEBUG] [app] Starting request: urn:request:e0e2dd26-8864-463b-b8e3-8e0ddd4c25c6 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:14,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:14,010 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:48:14,012 [246] [DEBUG] [app] Starting request: urn:request:fc564950-50a5-4c15-8501-c5960c3807a8 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:14,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:48:14,012 [246] [DEBUG] [app] Ending request: urn:request:fc564950-50a5-4c15-8501-c5960c3807a8 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:fc564950-50a5-4c15-8501-c5960c3807a8', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:48:14,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-web stdout | 2025-02-14 01:48:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:48:14,014 [252] [DEBUG] [app] Starting request: urn:request:a1585642-e48f-40e7-bc3b-88b2e5ef7427 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:14,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-registry stdout | 2025-02-14 01:48:14,015 [252] [DEBUG] [app] Ending request: urn:request:a1585642-e48f-40e7-bc3b-88b2e5ef7427 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:a1585642-e48f-40e7-bc3b-88b2e5ef7427', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.003 162 0.003) gunicorn-registry stdout | 2025-02-14 01:48:14,015 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:14,015 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:14,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:14,017 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:14,019 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:14,020 [245] [DEBUG] [app] Starting request: urn:request:e1a33951-1719-4648-9477-844ba0d97498 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:14,020 [243] [DEBUG] [app] Starting request: urn:request:8fda361d-2a2c-4057-b65e-8f06392c675b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:14,020 [245] [DEBUG] [app] Ending request: urn:request:e1a33951-1719-4648-9477-844ba0d97498 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:e1a33951-1719-4648-9477-844ba0d97498', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:14,020 [243] [DEBUG] [app] Ending request: urn:request:8fda361d-2a2c-4057-b65e-8f06392c675b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8fda361d-2a2c-4057-b65e-8f06392c675b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:48:14,021 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:14,021 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:14,021 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:14,021 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.005 159 0.005) gunicorn-web stdout | 2025-02-14 01:48:14,021 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:14,021 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:14,022 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:14,022 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:14,027 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:48:14,027 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
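The InsecureRequestWarning entries above come from the instance health check calling the registry and web ping endpoints over HTTPS on localhost without certificate verification, since the registry's certificate is not issued for "localhost". A hedged sketch of that self-ping with requests is below; the endpoint paths are taken from the log, everything else (the aggregation logic, the warning suppression) is illustrative.

```python
# Sketch of the localhost self-ping that produces the urllib3 InsecureRequestWarning
# seen above: HTTPS to localhost with verify=False. Paths come from the log;
# the aggregation logic is illustrative only.
import requests
import urllib3

# The cert is not valid for "localhost", so verification is skipped, which is
# exactly what urllib3 warns about. Silencing the warning is optional:
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def instance_health(base="https://localhost:8443"):
    checks = {}
    for path in ("/v1/_internal_ping", "/_internal_ping"):
        resp = requests.get(base + path, verify=False, timeout=5)
        checks[path] = resp.status_code == 200
    return all(checks.values()), checks


if __name__ == "__main__":
    healthy, detail = instance_health()
    print(healthy, detail)
```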
gunicorn-web stdout | 2025-02-14 01:48:14,027 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:14,027 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:14,034 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:14,034 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:14,037 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:14,037 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:14,039 [242] [DEBUG] [app] Ending request: urn:request:228f1d14-08cd-448c-844b-8ba2cc921539 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:228f1d14-08cd-448c-844b-8ba2cc921539', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:14,039 [244] [DEBUG] [app] Ending request: urn:request:e0e2dd26-8864-463b-b8e3-8e0ddd4c25c6 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:e0e2dd26-8864-463b-b8e3-8e0ddd4c25c6', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:14,039 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:14,039 [244] [DEBUG] [data.database] Disconnecting from database. 
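The database half of the health check above is bounded by a Postgres statement_timeout: set a 5-second cap, run a trivial existence query against teamrole, then reset the cap to 0. A sketch of that pattern with peewee's raw SQL interface follows; the connection parameters are placeholders, and only the SET/SELECT/SET sequence mirrors the logged queries.

```python
# Sketch of the statement_timeout-bounded health query logged above:
# SET statement_timeout=5000 -> SELECT ... FROM teamrole LIMIT 1 -> SET statement_timeout=0.
# Connection parameters are placeholders.
from peewee import PostgresqlDatabase

db = PostgresqlDatabase("quay", host="quayregistry-quay-database", port=5432, user="quay")


def team_roles_exist(timeout_ms=5000):
    db.connect(reuse_if_open=True)
    try:
        db.execute_sql("SET statement_timeout=%s;", (timeout_ms,))
        cursor = db.execute_sql('SELECT "id", "name" FROM "teamrole" LIMIT 1')
        return cursor.fetchone() is not None
    finally:
        db.execute_sql("SET statement_timeout=%s;", (0,))
        db.close()
```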
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.033 47 0.034) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.033 47 0.032) gunicorn-web stdout | 2025-02-14 01:48:14,040 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:48:14,040 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:48:14,478 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:48:14,578 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:48:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:25.803718+00:00 (in 9.999477 seconds) notificationworker stdout | 2025-02-14 01:48:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:25 UTC)" (scheduled at 2025-02-14 01:48:15.803718+00:00) notificationworker stdout | 2025-02-14 01:48:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:48:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 15, 804571), True, datetime.datetime(2025, 2, 14, 1, 48, 15, 804571), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:48:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:48:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:48:16,665 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:48:20,032 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:48:20,336 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:48:22,156 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:48:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:48:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:52.310342+00:00 (in 29.999561 seconds) autopruneworker stdout | 2025-02-14 01:48:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:52 UTC)" (scheduled at 2025-02-14 01:48:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:48:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494102316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:48:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:48:22,320 [56] [DEBUG] [data.database] Disconnecting from database. 
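The autoprune query above ends with FOR UPDATE SKIP LOCKED, so multiple worker replicas can each claim a different task row without blocking one another; when nothing matches, the worker logs "no autoprune tasks found, exiting..." and goes back to sleep. A small sketch of that claiming pattern with raw SQL through peewee is below; it is a simplified form of the logged query (the user-namespace subquery is dropped), and the surrounding function is illustrative.

```python
# Sketch of the FOR UPDATE SKIP LOCKED claim pattern used by the autoprune worker:
# concurrent workers each lock a different candidate row, or get nothing back.
# Simplified from the logged SQL; the wrapper function is illustrative.
import time

from peewee import PostgresqlDatabase

db = PostgresqlDatabase("quay", host="quayregistry-quay-database", port=5432, user="quay")

CLAIM_SQL = """
SELECT "id", "namespace_id", "last_ran_ms", "status"
FROM "autoprunetaskstatus"
WHERE ("last_ran_ms" < %s) OR ("last_ran_ms" IS NULL)
ORDER BY "last_ran_ms" ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""


def claim_autoprune_task(stale_before_ms=None):
    stale_before_ms = stale_before_ms or int(time.time() * 1000) - 60_000
    with db.atomic():  # the row lock is held for the duration of the transaction
        cursor = db.execute_sql(CLAIM_SQL, (stale_before_ms,))
        row = cursor.fetchone()
        if row is None:
            return None  # corresponds to "no autoprune tasks found, exiting..."
        # ... prune for this namespace, then update last_ran_ms ...
        return row
```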
autopruneworker stdout | 2025-02-14 01:48:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:48:23,051 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:48:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:48:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:29.232325+00:00 (in 5.000690 seconds) securityworker stdout | 2025-02-14 01:48:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:54 UTC)" (scheduled at 2025-02-14 01:48:24.231161+00:00) securityworker stdout | 2025-02-14 01:48:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:48:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:48:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:48:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:48:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:48:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
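Before indexing, the securityworker above asks Clair for its current indexer state via GET /indexer/api/v1/index_state, authenticating with a short-lived JWT it mints for the request ("generated jwt for security scanner request"). A hedged sketch of that call is below; the signing key, issuer, and claim layout are assumptions, and only the URL and the bearer-token pattern come from the log and Clair's v4 API.

```python
# Hedged sketch of the securityworker's index_state check: mint a short-lived JWT
# and GET Clair's /indexer/api/v1/index_state. The signing key, issuer, and claims
# are illustrative assumptions; only the URL and bearer-token pattern are from the log.
import time

import jwt       # PyJWT
import requests

CLAIR_BASE = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
SHARED_KEY = b"<pre-shared key from the Clair configuration>"  # placeholder


def get_index_state():
    now = int(time.time())
    token = jwt.encode(
        {"iss": "quay", "iat": now, "exp": now + 300},  # assumed claim layout
        SHARED_KEY,
        algorithm="HS256",
    )
    resp = requests.get(
        CLAIR_BASE + "/indexer/api/v1/index_state",
        headers={"Authorization": "Bearer " + token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the indexer state Clair reports back
```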
securityworker stdout | 2025-02-14 01:48:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:48:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:48:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:25.392556+00:00 (in 1.001707 seconds) gcworker stdout | 2025-02-14 01:48:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:54 UTC)" (scheduled at 2025-02-14 01:48:24.390410+00:00) gcworker stdout | 2025-02-14 01:48:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:48:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:48:24,873 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:48:25,216 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:48:25,216 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:30.212654+00:00 (in 4.996324 seconds) exportactionlogsworker stdout | 2025-02-14 01:48:25,216 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:25 UTC)" (scheduled at 2025-02-14 01:48:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:48:25,216 [63] [DEBUG] [workers.queueworker] Running watchdog. 
exportactionlogsworker stdout | 2025-02-14 01:48:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:48:25,276 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:48:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:48:25,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:54.390410+00:00 (in 28.997418 seconds) gcworker stdout | 2025-02-14 01:48:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:55 UTC)" (scheduled at 2025-02-14 01:48:25.392556+00:00) gcworker stdout | 2025-02-14 01:48:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:48:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497405401, None, 1, 0]) gcworker stdout | 2025-02-14 01:48:25,405 [64] [DEBUG] [data.database] Disconnecting from database. gcworker stdout | 2025-02-14 01:48:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:48:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:35.803718+00:00 (in 9.999472 seconds) notificationworker stdout | 2025-02-14 01:48:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:35 UTC)" (scheduled at 2025-02-14 01:48:25.803718+00:00) notificationworker stdout | 2025-02-14 01:48:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:48:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 25, 804588), True, datetime.datetime(2025, 2, 14, 1, 48, 25, 804588), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:25,814 [75] [DEBUG] [workers.queueworker] No more work. 
notificationworker stdout | 2025-02-14 01:48:25,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:48:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:48:26,044 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:48:26,480 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:48:26,867 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:48:27,250 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:48:27,548 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:48:27,676 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:48:27,934 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:48:28,230 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:48:28,367 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:48:28,765 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:48:29,007 [244] [DEBUG] [app] Starting request: urn:request:81b6aaed-7e95-4f2c-a592-5d0772990c8e (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:29,007 [245] [DEBUG] [app] Starting request: urn:request:c542e23a-a536-4423-876c-59d00d6ff655 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:29,008 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:29,011 [244] [WARNING] 
[py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:29,011 [245] [DEBUG] [urllib3.connectionpool] Starting new HTTPS connection (1): localhost:8443 gunicorn-registry stdout | 2025-02-14 01:48:29,012 [246] [DEBUG] [app] Starting request: urn:request:f231a083-8b25-43b4-986e-2da6d1bb072c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:48:29,012 [246] [DEBUG] [app] Ending request: urn:request:f231a083-8b25-43b4-986e-2da6d1bb072c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:f231a083-8b25-43b4-986e-2da6d1bb072c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:48:29,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:29,013 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:29,014 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:29,016 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:29,017 [242] [DEBUG] [app] Starting request: urn:request:a1429290-7fcf-4086-a6db-d64c2c1fc053 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:29,017 [242] [DEBUG] [app] Ending request: urn:request:a1429290-7fcf-4086-a6db-d64c2c1fc053 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:a1429290-7fcf-4086-a6db-d64c2c1fc053', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:48:29,017 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:29,018 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:29,018 [244] [DEBUG] [data.model.health] Validating database connection. 
gunicorn-web stdout | 2025-02-14 01:48:29,018 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:29,024 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:48:29,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:29,024 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:48:29,025 [246] [DEBUG] [app] Starting request: urn:request:d32f5a17-482c-45b0-b8c5-9b15f132a883 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:48:29,026 [246] [DEBUG] [app] Ending request: urn:request:d32f5a17-482c-45b0-b8c5-9b15f132a883 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d32f5a17-482c-45b0-b8c5-9b15f132a883', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:48:29,026 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:29,026 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:29,028 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:29,030 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:29,031 [242] [DEBUG] [app] Starting request: urn:request:f0f81ae8-da82-4006-96c4-986249121dde (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:29,031 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:29,031 [242] [DEBUG] [app] Ending request: urn:request:f0f81ae8-da82-4006-96c4-986249121dde (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:f0f81ae8-da82-4006-96c4-986249121dde', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:48:29,031 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:29,031 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:29,033 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name", "t1"."kid", "t1"."service", "t1"."jwk", "t1"."metadata", "t1"."created_date", "t1"."expiration_date", "t1"."rotation_duration", "t1"."approval_id" FROM "servicekey" AS "t1" LEFT OUTER JOIN "servicekeyapproval" AS "t2" ON ("t1"."approval_id" = "t2"."id") WHERE ((((NOT ("t1"."approval_id" IS %s) AND (("t1"."expiration_date" > %s) OR ("t1"."expiration_date" IS %s))) AND ("t1"."service" = %s)) AND (NOT (("t1"."service" = %s) AND ("t1"."expiration_date" <= %s)) OR NOT ((("t1"."service" = %s) AND ("t1"."approval_id" IS %s)) AND ("t1"."created_date" <= %s)))) AND (NOT ("t1"."expiration_date" <= %s) OR ("t1"."expiration_date" IS %s)))', [None, datetime.datetime(2025, 2, 14, 1, 48, 29, 32762), None, 'quay', 'quay', datetime.datetime(2025, 2, 14, 1, 48, 29, 32785), 'quay', None, datetime.datetime(2025, 2, 13, 1, 48, 29, 32799), datetime.datetime(2025, 2, 7, 1, 48, 29, 32808), None]) gunicorn-web stdout | 2025-02-14 01:48:29,033 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:29,036 [244] [DEBUG] [app] Ending request: urn:request:81b6aaed-7e95-4f2c-a592-5d0772990c8e (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:81b6aaed-7e95-4f2c-a592-5d0772990c8e', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:29,036 [244] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:48:29,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:48:29,042 [245] [DEBUG] [data.model.health] Validating database connection. 
gunicorn-web stdout | 2025-02-14 01:48:29,043 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:29,048 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:48:29,048 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:29,051 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:29,053 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:29,055 [245] [DEBUG] [app] Ending request: urn:request:c542e23a-a536-4423-876c-59d00d6ff655 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:c542e23a-a536-4423-876c-59d00d6ff655', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:29,056 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:29,056 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.050 47 0.050) securityworker stdout | 2025-02-14 01:48:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:48:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:54.231161+00:00 (in 24.998355 seconds) securityworker stdout | 2025-02-14 01:48:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:59 UTC)" (scheduled at 2025-02-14 01:48:29.232325+00:00) securityworker stdout | 2025-02-14 01:48:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:48:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:48:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:48:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:29,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:29,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:29 
[88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", 
"t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 43, 29, 236689), 1, 2]) securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", 
"t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 43, 29, 236689), 1, 2]) securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:48:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stdout | 2025-02-14 01:48:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:48:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:48:29,530 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:48:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:48:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:25.215238+00:00 (in 55.002147 seconds) exportactionlogsworker stdout | 2025-02-14 01:48:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:30 UTC)" (scheduled at 2025-02-14 01:48:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:48:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. 
exportactionlogsworker stdout | 2025-02-14 01:48:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 30, 213391), True, datetime.datetime(2025, 2, 14, 1, 48, 30, 213391), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:48:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:48:30,223 [63] [DEBUG] [data.database] Disconnecting from database. exportactionlogsworker stdout | 2025-02-14 01:48:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:48:31,229 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:48:31,231 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:48:31,234 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:48:31,240 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:48:31,243 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:48:31,267 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:48:32,067 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:48:32,446 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:48:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:48:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 
2025-02-14 01:48:36.014770+00:00 (in 3.002703 seconds) repositorygcworker stdout | 2025-02-14 01:48:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:33 UTC)" (scheduled at 2025-02-14 01:48:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:48:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. repositorygcworker stdout | 2025-02-14 01:48:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 33, 12391), True, datetime.datetime(2025, 2, 14, 1, 48, 33, 12391), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:48:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:48:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:48:33,023 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:48:33,148 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:48:33,151 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:48:33,153 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:48:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:48:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:04.000511+00:00 (in 29.999547 seconds) buildlogsarchiver stdout | 2025-02-14 01:48:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:04 UTC)" (scheduled at 2025-02-14 01:48:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:48:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 48, 34, 1226), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:48:34,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:48:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
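exportactionlogsworker and repositorygcworker above (and notificationworker further down) run the same QueueWorker.poll_queue lookup: one shared queueitem table filtered by a per-worker name prefix ('exportactionlogs/%', 'repositorygc/%', 'notification/%'), restricted to items that are available (or whose processing lease has expired) and that still have retries left. A minimal sketch of that predicate, assuming a DB-API connection; the helper is illustrative, not Quay's query builder:

```python
from datetime import datetime, timezone

POLL_SQL = """
SELECT id, queue_name, body
FROM queueitem
WHERE available_after <= %s
  AND (available = TRUE OR processing_expires <= %s)
  AND retries_remaining > 0
  AND queue_name ILIKE %s
ORDER BY random()
LIMIT 1
"""

def poll_queue(conn, prefix):
    """Look up one claim-eligible item for a queue prefix, e.g. 'repositorygc/%'."""
    now = datetime.now(timezone.utc)
    with conn.cursor() as cur:
        cur.execute(POLL_SQL, (now, now, prefix))
        return cur.fetchone()   # None is what the workers log as "No more work."
```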
buildlogsarchiver stdout | 2025-02-14 01:48:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:48:34,421 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:48:34,425 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:48:34,427 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:48:34,430 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:48:34,433 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:48:34,436 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:48:34,438 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:48:34,499 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:48:34,505 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:48:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:45.803718+00:00 (in 9.999507 seconds) notificationworker stdout | 2025-02-14 01:48:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:45 UTC)" (scheduled at 2025-02-14 01:48:35.803718+00:00) notificationworker stdout | 2025-02-14 01:48:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
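Every worker process also pushes its own metrics registry to the in-pod pushgateway on localhost:9091, keyed by host (the pod name), process_name, and pid, which is what the "[util.metrics.prometheus] pushed registry to pushgateway" lines record. A minimal sketch with the standard prometheus_client API; the metric and job names are placeholders:

```python
import os
import socket
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
heartbeat = Gauge('worker_heartbeat', 'Example per-process gauge', registry=registry)
heartbeat.set_to_current_time()

push_to_gateway(
    'localhost:9091',
    job='quay',                           # job name is an assumption for illustration
    registry=registry,
    grouping_key={
        'host': socket.gethostname(),     # e.g. quayregistry-quay-app-5dc574b8bf-tszt7
        'process_name': 'gcworker.py',
        'pid': str(os.getpid()),
    },
)
```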
notificationworker stdout | 2025-02-14 01:48:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 35, 804515), True, datetime.datetime(2025, 2, 14, 1, 48, 35, 804515), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:48:35,815 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:48:35,815 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:48:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:48:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:33.011632+00:00 (in 56.996412 seconds) repositorygcworker stdout | 2025-02-14 01:48:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:36 UTC)" (scheduled at 2025-02-14 01:48:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:48:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:48:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:48:44,007 [243] [DEBUG] [app] Starting request: urn:request:97555097-3dbe-4e20-962f-d23f964167ef (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:44,007 [245] [DEBUG] [app] Starting request: urn:request:0829af56-b6fe-4171-aa36-4c70afc73b90 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:44,008 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:44,008 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:44,011 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:44,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:48:44,012 [246] [DEBUG] [app] Starting request: urn:request:225c4d1c-91b9-4193-af5d-c532c3106854 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:48:44,012 [246] [DEBUG] [app] Ending request: urn:request:225c4d1c-91b9-4193-af5d-c532c3106854 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:225c4d1c-91b9-4193-af5d-c532c3106854', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:48:44,012 [253] [DEBUG] [app] Starting request: urn:request:d71a1fe2-cb88-41a7-a8d8-2a52099553a1 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:48:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:48:44,013 [253] [DEBUG] [app] Ending request: urn:request:d71a1fe2-cb88-41a7-a8d8-2a52099553a1 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d71a1fe2-cb88-41a7-a8d8-2a52099553a1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:44,013 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:48:44,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:44,014 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:44,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:44,016 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:44,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:44,017 [242] [DEBUG] [app] Starting request: urn:request:f98cc881-d31d-4fe6-8400-66d8bed7ce59 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:44,018 [242] [DEBUG] [app] Ending request: urn:request:f98cc881-d31d-4fe6-8400-66d8bed7ce59 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:f98cc881-d31d-4fe6-8400-66d8bed7ce59', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:44,018 [244] [DEBUG] [app] Starting request: urn:request:42b200e1-53c4-442c-b1cd-f9985c741a67 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:44,018 [244] [DEBUG] [app] Ending request: urn:request:42b200e1-53c4-442c-b1cd-f9985c741a67 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:42b200e1-53c4-442c-b1cd-f9985c741a67', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:44,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:48:44,018 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:44,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:48:44,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:44,019 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:44,019 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:44,019 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:44,019 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:44,024 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:48:44,025 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:44,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
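Each /health/instance probe from kube-probe fans out exactly as the lines above and below show: the gunicorn-web worker calls the registry's /v1/_internal_ping and the web /_internal_ping through nginx on https://localhost:8443 with TLS verification off (hence the InsecureRequestWarning), then validates the database by selecting a single teamrole row under a 5000 ms statement_timeout. A compact sketch of one such round, assuming a DB-API connection; it mirrors the logged calls but is not Quay's implementation:

```python
import requests

def instance_health(conn):
    ok = True
    # Self-pings through nginx; verify=False is what triggers the InsecureRequestWarning.
    for path in ('/v1/_internal_ping', '/_internal_ping'):
        resp = requests.get(f'https://localhost:8443{path}', verify=False, timeout=3)
        ok = ok and resp.status_code == 200
    # Database check: bounded by statement_timeout, then reset, as in the peewee lines.
    with conn.cursor() as cur:
        cur.execute('SET statement_timeout=%s;', (5000,))
        cur.execute('SELECT id, name FROM teamrole LIMIT 1')
        ok = ok and cur.fetchone() is not None
        cur.execute('SET statement_timeout=%s;', (0,))
    return ok
```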
gunicorn-web stdout | 2025-02-14 01:48:44,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:44,032 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:44,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:44,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:44,034 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:44,037 [243] [DEBUG] [app] Ending request: urn:request:97555097-3dbe-4e20-962f-d23f964167ef (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:97555097-3dbe-4e20-962f-d23f964167ef', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:44,037 [245] [DEBUG] [app] Ending request: urn:request:0829af56-b6fe-4171-aa36-4c70afc73b90 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:0829af56-b6fe-4171-aa36-4c70afc73b90', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:44,037 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:44,037 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:44,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:48:44,037 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) exportactionlogsworker stdout | 2025-02-14 01:48:44,495 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:48:44,602 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:48:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:48:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:58.505410+00:00 (in 13.001215 seconds) namespacegcworker stdout | 2025-02-14 01:48:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:45 UTC)" (scheduled at 2025-02-14 01:48:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:48:45,504 
[73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:48:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 45, 504493), True, datetime.datetime(2025, 2, 14, 1, 48, 45, 504493), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:48:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:48:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:48:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:48:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:55.803718+00:00 (in 9.999550 seconds) notificationworker stdout | 2025-02-14 01:48:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:55 UTC)" (scheduled at 2025-02-14 01:48:45.803718+00:00) notificationworker stdout | 2025-02-14 01:48:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:48:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 45, 804366), True, datetime.datetime(2025, 2, 14, 1, 48, 45, 804366), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:48:45,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:48:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:48:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:48:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:48:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:46.009738+00:00 (in 59.999506 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:48:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:46 UTC)" (scheduled at 2025-02-14 01:48:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:48:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:48:46,019 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:48:46,019 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:48:46,701 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:48:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:48:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:59.123196+00:00 (in 10.997552 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:48:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:48 UTC)" (scheduled at 2025-02-14 01:48:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:48:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
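All of these workers share the same scheduling skeleton: an APScheduler instance with interval jobs, which is what produces the recurring "Looking for jobs to run" / "Next wakeup is due at ..." DEBUG lines and the paired "Running job ... / ... executed successfully" INFO lines. A minimal sketch of that wiring; the job functions and intervals are placeholders:

```python
import logging
import time
from apscheduler.schedulers.background import BackgroundScheduler

logging.basicConfig(level=logging.DEBUG)   # surfaces the apscheduler DEBUG lines seen above

def poll_queue():
    print('Getting work item from queue.')

def run_watchdog():
    print('Running watchdog.')

scheduler = BackgroundScheduler()
scheduler.add_job(poll_queue, 'interval', seconds=10)   # e.g. the notificationworker cadence
scheduler.add_job(run_watchdog, 'interval', minutes=1)  # e.g. the watchdog cadence
scheduler.start()
time.sleep(25)                                          # let a few runs fire
scheduler.shutdown()
```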
securityscanningnotificationworker stdout | 2025-02-14 01:48:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:48:50,068 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:48:50,372 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:48:52,192 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:48:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:48:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:22.310342+00:00 (in 29.999560 seconds) autopruneworker stdout | 2025-02-14 01:48:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:22 UTC)" (scheduled at 2025-02-14 01:48:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:48:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494132316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:48:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:48:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
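The autopruneworker query above claims at most one autoprunetaskstatus row at a time, preferring namespaces that have never run (last_ran_ms IS NULL, sorted NULLS FIRST) or ran before the cutoff, and uses FOR UPDATE SKIP LOCKED so concurrent replicas never block on the same row. A simplified sketch of that claim (the logged query additionally excludes disabled namespaces); column names come from the log, the wrapper is illustrative:

```python
CLAIM_SQL = """
SELECT id, namespace_id, last_ran_ms
FROM autoprunetaskstatus
WHERE last_ran_ms < %s OR last_ran_ms IS NULL
ORDER BY last_ran_ms ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

def claim_autoprune_task(conn, cutoff_ms):
    with conn.cursor() as cur:
        cur.execute(CLAIM_SQL, (cutoff_ms,))
        task = cur.fetchone()
        if task is None:
            print('no autoprune tasks found, exiting...')   # matches the worker's log line
        return task
```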
autopruneworker stdout | 2025-02-14 01:48:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:22 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:05.898886+00:00 (in 12.997836 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:52 UTC)" (scheduled at 2025-02-14 01:48:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,910 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:48:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:48:53,063 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:48:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:48:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:59.232325+00:00 (in 5.000705 seconds) securityworker stdout | 2025-02-14 01:48:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:24 UTC)" (scheduled at 2025-02-14 01:48:54.231161+00:00) securityworker stdout | 2025-02-14 01:48:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:48:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:48:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:48:54,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:48:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:48:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
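Before each indexing pass the securityworker mints a short-lived JWT and checks Clair's indexer state at the URL shown above. A minimal sketch of that exchange; only the URL comes from the log, while the claim set and the HS256 pre-shared key are assumptions for illustration:

```python
import time
import jwt        # PyJWT
import requests

CLAIR_URL = 'http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local'

def get_index_state(psk: bytes) -> dict:
    # "generated jwt for security scanner request"
    token = jwt.encode(
        {'iss': 'quay', 'iat': int(time.time()), 'exp': int(time.time()) + 300},
        psk,
        algorithm='HS256',
    )
    # "GETing security URL .../indexer/api/v1/index_state"
    resp = requests.get(
        f'{CLAIR_URL}/indexer/api/v1/index_state',
        headers={'Authorization': f'Bearer {token}'},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```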
securityworker stdout | 2025-02-14 01:48:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:48:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:48:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:48:55.392556+00:00 (in 1.001746 seconds) gcworker stdout | 2025-02-14 01:48:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:24 UTC)" (scheduled at 2025-02-14 01:48:54.390410+00:00) gcworker stdout | 2025-02-14 01:48:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:48:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:48:54,909 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} notificationworker stdout | 2025-02-14 01:48:55,304 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:48:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:48:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:24.390410+00:00 (in 28.997441 seconds) gcworker stdout | 2025-02-14 01:48:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:25 UTC)" (scheduled at 2025-02-14 01:48:55.392556+00:00) gcworker stdout | 2025-02-14 01:48:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:48:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497435401, None, 1, 0]) gcworker stdout | 2025-02-14 01:48:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
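The last_ran_ms cutoffs embedded in these queries are Unix epoch milliseconds; decoding them makes the polling windows visible (the autopruneworker above looks roughly one hour back, the gcworker notification scan roughly five minutes back). A small reading aid, with the values taken from the log:

```python
from datetime import datetime, timezone

def from_epoch_ms(ms: int) -> datetime:
    """Convert an epoch-millisecond cutoff from the log into a UTC timestamp."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(from_epoch_ms(1739494132316))  # ~2025-02-14 00:48:52 UTC (autopruneworker cutoff)
print(from_epoch_ms(1739497435401))  # ~2025-02-14 01:43:55 UTC (gcworker notification cutoff)
```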
gcworker stdout | 2025-02-14 01:48:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:48:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:48:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:05.803718+00:00 (in 9.999558 seconds) notificationworker stdout | 2025-02-14 01:48:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:05 UTC)" (scheduled at 2025-02-14 01:48:55.803718+00:00) notificationworker stdout | 2025-02-14 01:48:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:48:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 55, 804367), True, datetime.datetime(2025, 2, 14, 1, 48, 55, 804367), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:48:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:48:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:48:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:48:56,058 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:48:56,516 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:48:56,903 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:48:57,268 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:48:57,580 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:48:57,694 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:48:57,970 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:48:58,258 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:48:58,376 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} namespacegcworker stdout | 2025-02-14 01:48:58,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:48:58,505 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:12.505687+00:00 (in 13.999822 seconds) namespacegcworker stdout | 2025-02-14 01:48:58,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:53:58 UTC)" (scheduled at 2025-02-14 01:48:58.505410+00:00) namespacegcworker stdout | 2025-02-14 01:48:58,506 [73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:48:58,516 
[73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), True, datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), 0, 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:48:58,519 [73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), True, datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), 0, 'namespacegc/%', False, datetime.datetime(2025, 2, 14, 1, 48, 58, 506143), 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:48:58,522 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:48:58,522 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:53:58 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:48:58,795 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:48:59,007 [245] [DEBUG] [app] Starting request: urn:request:d893e1e8-f502-4cb5-be24-4b899c8b1f89 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:59,008 [244] [DEBUG] [app] Starting request: urn:request:5010ba2d-3dbb-4e45-aa68-a28cdc580ff9 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:48:59,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:59,010 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:59,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:59,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:48:59,013 [246] [DEBUG] [app] Starting request: urn:request:b268266a-f7a2-421d-917f-5a3a8b87a947 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:48:59,013 [246] [DEBUG] [app] Ending request: urn:request:b268266a-f7a2-421d-917f-5a3a8b87a947 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:b268266a-f7a2-421d-917f-5a3a8b87a947', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:48:59,013 [253] [DEBUG] [app] Starting request: urn:request:92762350-65f6-4874-abdd-e1dcbd6bd5b8 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:48:59,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:48:59,013 [253] [DEBUG] [app] Ending request: urn:request:92762350-65f6-4874-abdd-e1dcbd6bd5b8 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:92762350-65f6-4874-abdd-e1dcbd6bd5b8', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:48:59,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:48:59,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:59,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:59,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:59,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:48:59,017 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:59,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:48:59,018 [243] [DEBUG] [app] Starting request: urn:request:e801ee36-536d-4d2c-b581-362282f20f91 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:59,018 [243] [DEBUG] [app] Ending request: urn:request:e801ee36-536d-4d2c-b581-362282f20f91 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:e801ee36-536d-4d2c-b581-362282f20f91', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:59,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:48:59,019 [242] [DEBUG] [app] Starting request: urn:request:459ac185-e73d-40ab-a41b-939d800ef05b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:48:59,019 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:48:59,019 [242] [DEBUG] [app] Ending request: urn:request:459ac185-e73d-40ab-a41b-939d800ef05b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:459ac185-e73d-40ab-a41b-939d800ef05b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:48:59,019 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:59,019 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:59,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:48:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:48:59,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:48:59,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:48:59,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:48:59,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:48:59,025 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:59,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:48:59,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:48:59,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:59,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:48:59,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:59,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:48:59,036 [244] [DEBUG] [app] Ending request: urn:request:5010ba2d-3dbb-4e45-aa68-a28cdc580ff9 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:5010ba2d-3dbb-4e45-aa68-a28cdc580ff9', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:59,037 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:59,037 [245] [DEBUG] [app] Ending request: urn:request:d893e1e8-f502-4cb5-be24-4b899c8b1f89 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:d893e1e8-f502-4cb5-be24-4b899c8b1f89', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:48:59,037 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:48:59,037 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.029) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:48:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) gunicorn-web stdout | 2025-02-14 01:48:59,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:48:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" securityscanningnotificationworker stdout | 2025-02-14 01:48:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:48:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:48.125163+00:00 (in 49.001498 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:48:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:59 UTC)" (scheduled at 2025-02-14 01:48:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:48:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
securityscanningnotificationworker stdout | 2025-02-14 01:48:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 48, 59, 123977), True, datetime.datetime(2025, 2, 14, 1, 48, 59, 123977), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:48:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:48:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:48:59,134 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:49:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:48:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:48:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:24.231161+00:00 (in 24.998409 seconds) securityworker stdout | 2025-02-14 01:48:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:29 UTC)" (scheduled at 2025-02-14 01:48:59.232325+00:00) securityworker stdout | 2025-02-14 01:48:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:48:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:48:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:48:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 43, 59, 236425), 1, 2]) securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:48:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:48:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 43, 59, 236425), 1, 2]) securityworker stdout | 2025-02-14 01:48:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:48:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:48:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:48:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:48:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:48:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:48:59,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:48:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:48:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:48:59,564 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:49:01,237 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:49:01,240 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:49:01,243 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:49:01,247 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:49:01,250 [245] [DEBUG] [util.metrics.prometheus] pushed registry to 
pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:49:01,277 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:49:02,102 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:49:02,480 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:49:03,156 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:49:03,159 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:49:03,161 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:49:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:49:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:34.000511+00:00 (in 29.999534 seconds) buildlogsarchiver stdout | 2025-02-14 01:49:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:34 UTC)" (scheduled at 2025-02-14 01:49:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:49:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 49, 4, 1281), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:49:04,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:49:04,010 [59] [DEBUG] [data.database] Disconnecting from database. 
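[Editor's note] The repeated "pushed registry to pushgateway at http://localhost:9091 with grouping key {...}" entries above come from Quay's util.metrics.prometheus helper, which pushes each worker's metrics registry to a local Prometheus Pushgateway keyed by host, process name, and PID. A minimal standalone sketch of the same pattern with the prometheus_client library (the metric name, job name, and grouping-key values here are illustrative, not Quay's own):

    # Sketch only: pushes one gauge to a Pushgateway the way the worker log lines above describe.
    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    last_run = Gauge(
        "worker_last_run_timestamp",          # illustrative metric name
        "Unix time of the last successful worker run",
        registry=registry,
    )
    last_run.set_to_current_time()

    push_to_gateway(
        "localhost:9091",                     # gateway address seen in the log
        job="quay-worker",                    # illustrative job name
        registry=registry,
        grouping_key={                        # mirrors the grouping-key structure logged above
            "host": "quayregistry-quay-app-5dc574b8bf-tszt7",
            "process_name": "buildlogsarchiver.py",
            "pid": "59",
        },
    )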
buildlogsarchiver stdout | 2025-02-14 01:49:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:49:04,433 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:49:04,435 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:49:04,439 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:49:04,443 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:49:04,445 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:49:04,448 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:49:04,450 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:49:04,508 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:49:04,516 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:49:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:07.807092+00:00 (in 2.002922 seconds) notificationworker stdout | 2025-02-14 01:49:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:15 UTC)" (scheduled at 2025-02-14 01:49:05.803718+00:00) notificationworker stdout | 2025-02-14 01:49:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:49:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 5, 804378), True, datetime.datetime(2025, 2, 14, 1, 49, 5, 804378), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:49:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:49:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:52.900596+00:00 (in 47.001272 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:05 UTC)" (scheduled at 2025-02-14 01:49:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:49:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:15.803718+00:00 (in 7.996172 seconds) notificationworker stdout | 2025-02-14 01:49:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:07 UTC)" (scheduled at 2025-02-14 01:49:07.807092+00:00) notificationworker stdout | 2025-02-14 01:49:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
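[Editor's note] The notificationworker and securityscanningnotificationworker polls above share one work-queue pattern: select up to 50 queueitem rows whose available_after has passed, that are either still marked available or whose processing lease (processing_expires) has lapsed, that have retries_remaining, and whose queue_name matches a prefix, then pick one at random. A simplified sketch of that claim query run directly with psycopg2 (driver and DSN are assumptions; Quay itself issues the query through peewee, and some columns are trimmed for brevity):

    import datetime
    import psycopg2  # assumed client driver for this sketch

    # Paraphrase of the claim query logged above.
    CLAIM_SQL = """
    SELECT t1.id, t1.queue_name, t1.body
    FROM queueitem AS t1
    INNER JOIN (
        SELECT id
        FROM queueitem
        WHERE available_after <= %(now)s
          AND (available = TRUE OR processing_expires <= %(now)s)
          AND retries_remaining > %(min_retries)s
          AND queue_name ILIKE %(prefix)s
        LIMIT 50
    ) AS j1 ON t1.id = j1.id
    ORDER BY RANDOM()
    LIMIT 1
    """

    conn = psycopg2.connect("dbname=quay")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(CLAIM_SQL, {
            "now": datetime.datetime.utcnow(),
            "min_retries": 0,
            "prefix": "notification/%",      # or 'secscanv4/%' for the security scanner queue above
        })
        item = cur.fetchone()
        if item is None:
            print("No more work.")           # same outcome the workers log above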
notificationworker stdout | 2025-02-14 01:49:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:49:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:49:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:45.503718+00:00 (in 32.997561 seconds) namespacegcworker stdout | 2025-02-14 01:49:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:12 UTC)" (scheduled at 2025-02-14 01:49:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:49:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:49:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:49:14,007 [242] [DEBUG] [app] Starting request: urn:request:05ec4469-bc29-4e96-8b9e-da14c148672a (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:14,007 [245] [DEBUG] [app] Starting request: urn:request:cf2d5ed4-8dea-4aa8-bcb7-fc49d80de650 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:14,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:14,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:14,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:49:14,012 [246] [DEBUG] [app] Starting request: urn:request:699defde-653c-4847-9e69-936f5cf441b5 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:14,013 [246] [DEBUG] [app] Ending request: urn:request:699defde-653c-4847-9e69-936f5cf441b5 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:699defde-653c-4847-9e69-936f5cf441b5', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:49:14,013 [252] [DEBUG] [app] Starting request: urn:request:b7888682-8cb1-4813-acd7-a7514cd47038 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:14,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:49:14,013 [252] [DEBUG] [app] Ending request: urn:request:b7888682-8cb1-4813-acd7-a7514cd47038 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:b7888682-8cb1-4813-acd7-a7514cd47038', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:49:14,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:14,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-web stdout | 2025-02-14 01:49:14,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:14,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:14,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:14,017 [242] [DEBUG] [app] Starting request: urn:request:59fea245-3319-4029-88d5-e3921dd2f893 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:14,018 [242] [DEBUG] [app] Ending request: urn:request:59fea245-3319-4029-88d5-e3921dd2f893 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:59fea245-3319-4029-88d5-e3921dd2f893', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:14,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:49:14,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:14,018 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:14,019 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:14,019 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:14,020 [244] [DEBUG] [app] Starting request: urn:request:467185da-0b9a-4412-9427-05b5e4a63c10 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:14,020 [244] [DEBUG] [app] Ending request: urn:request:467185da-0b9a-4412-9427-05b5e4a63c10 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:467185da-0b9a-4412-9427-05b5e4a63c10', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:49:14,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:14,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:14,021 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:14,021 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:14,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:49:14,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:14,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:49:14,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:14,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:14,033 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:14,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:14,036 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:14,036 [245] [DEBUG] [app] Ending request: urn:request:cf2d5ed4-8dea-4aa8-bcb7-fc49d80de650 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:cf2d5ed4-8dea-4aa8-bcb7-fc49d80de650', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:14,036 [245] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:49:14,036 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:49:14,038 [242] [DEBUG] [app] Ending request: urn:request:05ec4469-bc29-4e96-8b9e-da14c148672a (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:05ec4469-bc29-4e96-8b9e-da14c148672a', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:14,038 [242] [DEBUG] [data.database] Disconnecting from database. 
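[Editor's note] The /health/instance handler above bounds its database probe: it sets statement_timeout to 5000 ms, checks that at least one teamrole row exists, then resets the timeout to 0 (no limit) so the connection is left clean before disconnecting. The same three statements, run directly with psycopg2 for illustration (driver and DSN are assumptions; Quay issues them through peewee):

    import psycopg2  # illustrative client for this sketch

    conn = psycopg2.connect("dbname=quay")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("SET statement_timeout=%s;", (5000,))   # cap the probe at 5 s
        cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', (1,))
        healthy = cur.fetchone() is not None                 # any row means the schema is reachable
        cur.execute("SET statement_timeout=%s;", (0,))       # restore the default (no limit)
    print("database reachable:", healthy)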
gunicorn-web stdout | 2025-02-14 01:49:14,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.033) exportactionlogsworker stdout | 2025-02-14 01:49:14,531 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:49:14,637 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:49:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:25.803718+00:00 (in 9.999569 seconds) notificationworker stdout | 2025-02-14 01:49:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:25 UTC)" (scheduled at 2025-02-14 01:49:15.803718+00:00) notificationworker stdout | 2025-02-14 01:49:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:49:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 15, 804429), True, datetime.datetime(2025, 2, 14, 1, 49, 15, 804429), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:49:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:49:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:49:16,715 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:49:20,076 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:49:20,403 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:49:22,220 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:49:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:49:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:52.310342+00:00 (in 29.999568 seconds) autopruneworker stdout | 2025-02-14 01:49:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:52 UTC)" (scheduled at 2025-02-14 01:49:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:49:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494162316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:49:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:49:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
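[Editor's note] The autopruneworker claim above relies on FOR UPDATE SKIP LOCKED so that several Quay pods can poll autoprunetaskstatus concurrently without blocking on the same row. A trimmed sketch of that claim (the disabled-namespace subquery from the logged SQL is omitted for brevity; driver and DSN are assumptions):

    import psycopg2  # assumed client driver for this sketch

    CLAIM_TASK = """
    SELECT id, namespace_id, last_ran_ms
    FROM autoprunetaskstatus
    WHERE last_ran_ms < %(cutoff_ms)s OR last_ran_ms IS NULL
    ORDER BY last_ran_ms ASC NULLS FIRST
    LIMIT 1
    FOR UPDATE SKIP LOCKED
    """

    conn = psycopg2.connect("dbname=quay")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(CLAIM_TASK, {"cutoff_ms": 1739494162316})  # epoch-millisecond cutoff, as logged above
        task = cur.fetchone()
        if task is None:
            print("no autoprune tasks found, exiting...")       # the worker's own message above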
autopruneworker stdout | 2025-02-14 01:49:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:49:23,095 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:49:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:49:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:29.232325+00:00 (in 5.000655 seconds) securityworker stdout | 2025-02-14 01:49:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:54 UTC)" (scheduled at 2025-02-14 01:49:24.231161+00:00) securityworker stdout | 2025-02-14 01:49:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:49:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:49:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:49:24,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:49:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:49:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
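[Editor's note] Before indexing, the securityworker asks Clair for its current indexer state at the /indexer/api/v1/index_state endpoint logged above; the "generated jwt for security scanner request" line shows the call is authenticated with a service JWT. A hedged sketch of that request with python-requests (the bearer token is a placeholder, and the response is assumed to be a small document describing the indexer state):

    import requests

    CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

    resp = requests.get(
        f"{CLAIR}/indexer/api/v1/index_state",
        headers={"Authorization": "Bearer <service-jwt>"},  # placeholder for the JWT the worker generates
        timeout=5,
    )
    resp.raise_for_status()
    print(resp.status_code, resp.text)  # indexer state fingerprint used to decide whether manifests need re-indexing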
securityworker stdout | 2025-02-14 01:49:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:49:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:49:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:25.392556+00:00 (in 1.001714 seconds) gcworker stdout | 2025-02-14 01:49:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:54 UTC)" (scheduled at 2025-02-14 01:49:24.390410+00:00) gcworker stdout | 2025-02-14 01:49:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:49:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:49:24,945 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:49:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:49:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:30.212654+00:00 (in 4.996943 seconds) exportactionlogsworker stdout | 2025-02-14 01:49:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:25 UTC)" (scheduled at 2025-02-14 01:49:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:49:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. 
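[Editor's note] Every worker in this log (gcworker, notificationworker, securityworker, the watchdog jobs, and so on) is driven the same way: an APScheduler interval job, whose "Looking for jobs to run" and "Next wakeup is due at ..." lines appear above at DEBUG level. A minimal sketch of that scheduling pattern (the job body and interval are illustrative):

    import logging
    from apscheduler.schedulers.blocking import BlockingScheduler

    logging.basicConfig(level=logging.DEBUG)  # surfaces apscheduler.scheduler's DEBUG lines, as in the log

    def poll_queue():
        # stand-in for QueueWorker.poll_queue or the run_watchdog jobs logged above
        print("Getting work item from queue.")

    sched = BlockingScheduler()
    sched.add_job(poll_queue, "interval", seconds=30, id="poll_queue")  # matches 'trigger: interval[0:00:30]'
    sched.start()  # blocks; apscheduler then logs each run and the next wakeup time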
exportactionlogsworker stdout | 2025-02-14 01:49:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:25,316 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:49:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:49:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:54.390410+00:00 (in 28.997425 seconds) gcworker stdout | 2025-02-14 01:49:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:55 UTC)" (scheduled at 2025-02-14 01:49:25.392556+00:00) gcworker stdout | 2025-02-14 01:49:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:49:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497465401, None, 1, 0]) gcworker stdout | 2025-02-14 01:49:25,405 [64] [DEBUG] [data.database] Disconnecting from database. gcworker stdout | 2025-02-14 01:49:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:35.803718+00:00 (in 9.999552 seconds) notificationworker stdout | 2025-02-14 01:49:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:35 UTC)" (scheduled at 2025-02-14 01:49:25.803718+00:00) notificationworker stdout | 2025-02-14 01:49:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:49:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 25, 804374), True, datetime.datetime(2025, 2, 14, 1, 49, 25, 804374), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:25,814 [75] [DEBUG] [workers.queueworker] No more work. 
notificationworker stdout | 2025-02-14 01:49:25,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:49:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:49:26,066 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:49:26,545 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:49:26,939 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:49:27,278 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:49:27,617 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:49:27,730 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:49:28,007 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:49:28,280 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:49:28,395 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:49:28,809 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:49:29,007 [242] [DEBUG] [app] Starting request: urn:request:af0ab288-d5dc-49ca-ba42-815b9b24e6a6 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:29,008 [244] [DEBUG] [app] Starting request: urn:request:0bb55d04-be23-40e2-ba5d-a6c738d53bb6 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:29,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:29,009 [244] [DEBUG] 
[urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:29,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:29,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:49:29,012 [246] [DEBUG] [app] Starting request: urn:request:68254981-3735-43e3-88ed-7162052fa55c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:29,013 [246] [DEBUG] [app] Ending request: urn:request:68254981-3735-43e3-88ed-7162052fa55c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:68254981-3735-43e3-88ed-7162052fa55c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:49:29,013 [252] [DEBUG] [app] Starting request: urn:request:aceeb178-37e9-4970-b076-b1609a7cc702 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:29,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:49:29,013 [252] [DEBUG] [app] Ending request: urn:request:aceeb178-37e9-4970-b076-b1609a7cc702 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:aceeb178-37e9-4970-b076-b1609a7cc702', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:29,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:49:29,014 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:29,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:29,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:29,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:29,016 [242] [WARNING] [py.warnings] 
/app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:29,017 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:29,017 [245] [DEBUG] [app] Starting request: urn:request:b354526b-21e5-4d2b-adc5-a647e5f7517d (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:29,018 [243] [DEBUG] [app] Starting request: urn:request:436c6d86-ca38-4374-b36d-830059d52661 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:29,018 [245] [DEBUG] [app] Ending request: urn:request:b354526b-21e5-4d2b-adc5-a647e5f7517d (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:b354526b-21e5-4d2b-adc5-a647e5f7517d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:29,018 [243] [DEBUG] [app] Ending request: urn:request:436c6d86-ca38-4374-b36d-830059d52661 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:436c6d86-ca38-4374-b36d-830059d52661', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:29,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:49:29,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:29,018 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:29,018 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:29,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:29,019 [244] [DEBUG] [data.model.health] Validating database connection. 
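The InsecureRequestWarning entries above come from the gunicorn-web workers pinging their own HTTPS listener with certificate verification turned off. Below is a minimal sketch of how that warning is produced with requests/urllib3, and how it could be silenced when the unverified loopback call is intentional; the URL mirrors the log, but the snippet is illustrative rather than Quay's actual call site, and it assumes something is serving https://localhost:8443.

    import requests
    import urllib3

    # An HTTPS request with verify=False triggers urllib3's InsecureRequestWarning,
    # which is exactly what the gunicorn-web workers log before each internal ping.
    resp = requests.get("https://localhost:8443/v1/_internal_ping", verify=False, timeout=3)
    print(resp.status_code)

    # If the unverified call is deliberate (a loopback hop to a self-signed listener),
    # the warning can be silenced explicitly:
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)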
gunicorn-web stdout | 2025-02-14 01:49:29,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:29,019 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:29,024 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:49:29,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:49:29,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:29,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:29,031 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:29,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:29,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:29,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:29,036 [244] [DEBUG] [app] Ending request: urn:request:0bb55d04-be23-40e2-ba5d-a6c738d53bb6 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:0bb55d04-be23-40e2-ba5d-a6c738d53bb6', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:29,036 [242] [DEBUG] [app] Ending request: urn:request:af0ab288-d5dc-49ca-ba42-815b9b24e6a6 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:af0ab288-d5dc-49ca-ba42-815b9b24e6a6', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:29,036 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:49:29,036 [242] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:49:29,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.029) gunicorn-web stdout | 2025-02-14 01:49:29,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) securityworker stdout | 2025-02-14 01:49:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:49:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:54.231161+00:00 (in 24.998337 seconds) securityworker stdout | 2025-02-14 01:49:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:59 UTC)" (scheduled at 2025-02-14 01:49:29.232325+00:00) securityworker stdout | 2025-02-14 01:49:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:49:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:49:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:49:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:29,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:29,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No 
candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 44, 29, 236585), 1, 2]) securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:29,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 44, 29, 236585), 1, 2]) securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:29,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:29,254 [88] [DEBUG] [util.migrate.allocator] Marking the range 
completed: 1-2 securityworker stdout | 2025-02-14 01:49:29,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:49:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:49:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:49:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:49:29,586 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:49:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:49:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:39.215004+00:00 (in 9.001914 seconds) exportactionlogsworker stdout | 2025-02-14 01:49:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:30 UTC)" (scheduled at 2025-02-14 01:49:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:49:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:49:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 30, 213391), True, datetime.datetime(2025, 2, 14, 1, 49, 30, 213391), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:49:30,222 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:49:30,223 [63] [DEBUG] [data.database] Disconnecting from database. 
exportactionlogsworker stdout | 2025-02-14 01:49:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:49:31,246 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:49:31,251 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:49:31,254 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:49:31,256 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:49:31,259 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:49:31,303 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:49:32,139 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:49:32,503 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:49:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:49:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:36.014770+00:00 (in 3.002691 seconds) repositorygcworker stdout | 2025-02-14 01:49:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:33 UTC)" (scheduled at 2025-02-14 01:49:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:49:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:49:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 33, 12359), True, datetime.datetime(2025, 2, 14, 1, 49, 33, 12359), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:49:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:49:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:49:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:49:33,165 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:49:33,168 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:49:33,171 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:49:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:49:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:04.000511+00:00 (in 29.999527 seconds) buildlogsarchiver stdout | 2025-02-14 01:49:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:04 UTC)" (scheduled at 2025-02-14 01:49:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:49:34,002 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 49, 34, 1273), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:49:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:49:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
buildlogsarchiver stdout | 2025-02-14 01:49:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:49:34,444 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:49:34,447 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:49:34,450 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:49:34,452 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:49:34,455 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:49:34,458 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:49:34,460 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:49:34,517 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:49:34,524 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:49:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:41.806837+00:00 (in 6.002674 seconds) notificationworker stdout | 2025-02-14 01:49:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:45 UTC)" (scheduled at 2025-02-14 01:49:35.803718+00:00) notificationworker stdout | 2025-02-14 01:49:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
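The "pushed registry to pushgateway" entries above show each worker process pushing its own metrics registry to a local Prometheus pushgateway on port 9091, grouped by host, process name, and PID. A minimal sketch of that pattern with prometheus_client follows; the metric and job names are placeholders, and it assumes a pushgateway is reachable at localhost:9091.

    import os
    import socket

    from prometheus_client import CollectorRegistry, Counter, push_to_gateway

    # A per-process registry, pushed with a {host, process_name, pid} grouping key
    # shaped like the one the Quay workers log.
    registry = CollectorRegistry()
    heartbeats = Counter("example_heartbeats", "Illustrative per-worker counter", registry=registry)
    heartbeats.inc()

    push_to_gateway(
        "localhost:9091",              # pushgateway address seen in the log
        job="example_worker",          # placeholder job name
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),
            "process_name": "example_worker.py",
            "pid": str(os.getpid()),
        },
    )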
notificationworker stdout | 2025-02-14 01:49:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 35, 804366), True, datetime.datetime(2025, 2, 14, 1, 49, 35, 804366), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:49:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:49:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:49:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:49:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:43.014615+00:00 (in 6.999419 seconds) repositorygcworker stdout | 2025-02-14 01:49:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:36 UTC)" (scheduled at 2025-02-14 01:49:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:49:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. 
repositorygcworker stdout | 2025-02-14 01:49:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:36 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:49:39,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:49:39,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:25.215238+00:00 (in 45.999769 seconds) exportactionlogsworker stdout | 2025-02-14 01:49:39,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:39 UTC)" (scheduled at 2025-02-14 01:49:39.215004+00:00) exportactionlogsworker stdout | 2025-02-14 01:49:39,216 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:49:39,225 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), True, datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), 0, 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:49:39,227 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), True, datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), 0, 'exportactionlogs/%', False, datetime.datetime(2025, 2, 14, 1, 49, 39, 215734), 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:49:39,230 [63] [DEBUG] [data.database] Disconnecting from database. 
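The worker entries above ("Looking for jobs to run", "Running job ... (trigger: interval[...])", "executed successfully") are APScheduler interval jobs. A minimal sketch of that scheduling pattern with a BackgroundScheduler follows; the job functions are placeholders standing in for QueueWorker.poll_queue and QueueWorker.update_queue_metrics, not Quay's implementations.

    import time

    from apscheduler.schedulers.background import BackgroundScheduler

    def poll_queue():
        # stand-in for QueueWorker.poll_queue from the log
        print("polling queue")

    def update_queue_metrics():
        # stand-in for QueueWorker.update_queue_metrics from the log
        print("updating queue metrics")

    scheduler = BackgroundScheduler()
    scheduler.add_job(poll_queue, "interval", seconds=10)           # interval[0:00:10]
    scheduler.add_job(update_queue_metrics, "interval", minutes=5)  # interval[0:05:00]
    scheduler.start()

    try:
        time.sleep(30)  # let a few runs fire
    finally:
        scheduler.shutdown()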
exportactionlogsworker stdout | 2025-02-14 01:49:39,230 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:39 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:41,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:41,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:45.803718+00:00 (in 3.996438 seconds) notificationworker stdout | 2025-02-14 01:49:41,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:41 UTC)" (scheduled at 2025-02-14 01:49:41.806837+00:00) notificationworker stdout | 2025-02-14 01:49:41,807 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), 'notification/%']) notificationworker stdout | 2025-02-14 01:49:41,817 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), True, datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), 0, 'notification/%']) notificationworker stdout | 2025-02-14 01:49:41,820 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), True, datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), 0, 'notification/%', False, datetime.datetime(2025, 2, 14, 1, 49, 41, 807490), 'notification/%']) notificationworker stdout | 2025-02-14 01:49:41,822 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:49:41,822 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:41 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:49:43,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:49:43,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:33.011632+00:00 (in 49.996511 seconds) repositorygcworker stdout | 2025-02-14 01:49:43,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:43 UTC)" (scheduled at 2025-02-14 01:49:43.014615+00:00) repositorygcworker stdout | 2025-02-14 01:49:43,015 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:49:43,024 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), True, datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), 0, 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:49:43,027 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), True, datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), 0, 'repositorygc/%', False, datetime.datetime(2025, 2, 14, 1, 49, 43, 15382), 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:49:43,030 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:49:43,030 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:54:43 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:49:44,007 [242] [DEBUG] [app] Starting request: urn:request:ea44d42c-ce9e-406d-95dd-cd575b39bff8 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:44,007 [244] [DEBUG] [app] Starting request: urn:request:65561886-3133-44de-ab72-f02c4d7b7d95 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:44,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:44,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. 
Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:44,011 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:49:44,012 [246] [DEBUG] [app] Starting request: urn:request:9a5baa10-cdda-4a23-83f7-c3ffec587aa6 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:44,012 [246] [DEBUG] [app] Ending request: urn:request:9a5baa10-cdda-4a23-83f7-c3ffec587aa6 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:9a5baa10-cdda-4a23-83f7-c3ffec587aa6', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:49:44,012 [252] [DEBUG] [app] Starting request: urn:request:e5c4eaad-3dbc-4012-98fe-46a2eeca74ed (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:44,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:49:44,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:49:44,013 [252] [DEBUG] [app] Ending request: urn:request:e5c4eaad-3dbc-4012-98fe-46a2eeca74ed (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:e5c4eaad-3dbc-4012-98fe-46a2eeca74ed', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:49:44,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:49:44,013 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:44,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:44,014 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:44,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:44,016 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:44,017 [245] [DEBUG] [app] Starting request: urn:request:27a26bf7-4d5c-4165-98ad-6cccd95486ad (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:44,017 [245] [DEBUG] [app] Ending request: urn:request:27a26bf7-4d5c-4165-98ad-6cccd95486ad (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:27a26bf7-4d5c-4165-98ad-6cccd95486ad', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:44,017 [243] [DEBUG] [app] Starting request: urn:request:8ba8be1d-4857-44ac-a814-e55894539806 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:49:44,017 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:44,017 [243] [DEBUG] [app] Ending request: urn:request:8ba8be1d-4857-44ac-a814-e55894539806 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8ba8be1d-4857-44ac-a814-e55894539806', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:44,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:44,018 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:44,018 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:49:44,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:44,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:44,018 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:44,018 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:44,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:49:44,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:44,024 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:49:44,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:44,031 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:44,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:44,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:44,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:44,036 [244] [DEBUG] [app] Ending request: urn:request:65561886-3133-44de-ab72-f02c4d7b7d95 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:65561886-3133-44de-ab72-f02c4d7b7d95', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:44,036 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:49:44,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) gunicorn-web stdout | 2025-02-14 01:49:44,037 [242] [DEBUG] [app] Ending request: urn:request:ea44d42c-ce9e-406d-95dd-cd575b39bff8 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:ea44d42c-ce9e-406d-95dd-cd575b39bff8', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:44,037 [242] [DEBUG] [data.database] Disconnecting from database. 
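The instance health check above bounds its probe with a 5000 ms statement timeout, selects a single row from teamrole, then resets the timeout. A minimal sketch of an equivalent probe with psycopg2, assuming a reachable database; the connection string is a placeholder, not the registry's actual credentials.

    import psycopg2

    # Placeholder DSN; a real deployment would supply credentials from its config.
    conn = psycopg2.connect("dbname=quay user=quay host=quayregistry-quay-database port=5432")
    try:
        with conn.cursor() as cur:
            cur.execute("SET statement_timeout = 5000;")   # bound the probe to 5000 ms
            cur.execute('SELECT "id", "name" FROM "teamrole" LIMIT 1')
            healthy = cur.fetchone() is not None           # any row proves connectivity
            cur.execute("SET statement_timeout = 0;")      # restore the default (no limit)
        print("database healthy:", healthy)
    finally:
        conn.close()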
gunicorn-web stdout | 2025-02-14 01:49:44,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) exportactionlogsworker stdout | 2025-02-14 01:49:44,567 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:49:44,673 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:49:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:49:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:12.505687+00:00 (in 27.001525 seconds) namespacegcworker stdout | 2025-02-14 01:49:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:45 UTC)" (scheduled at 2025-02-14 01:49:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:49:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:49:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 45, 504440), True, datetime.datetime(2025, 2, 14, 1, 49, 45, 504440), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:49:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:49:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:49:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:55.803718+00:00 (in 9.999558 seconds) notificationworker stdout | 2025-02-14 01:49:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:55 UTC)" (scheduled at 2025-02-14 01:49:45.803718+00:00) notificationworker stdout | 2025-02-14 01:49:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:49:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 45, 804360), True, datetime.datetime(2025, 2, 14, 1, 49, 45, 804360), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:45,813 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:49:45,813 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:49:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:49:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:49:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:49:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:46.009738+00:00 (in 59.999577 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:49:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:46 UTC)" (scheduled at 2025-02-14 01:49:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:49:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:49:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:49:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:49:46,737 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:49:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:49:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:59.123196+00:00 (in 10.997567 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:49:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:48 UTC)" (scheduled at 2025-02-14 01:49:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:49:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
securityscanningnotificationworker stdout | 2025-02-14 01:49:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:49:50,112 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:49:50,435 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:49:52,256 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:49:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:49:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:22.310342+00:00 (in 29.999587 seconds) autopruneworker stdout | 2025-02-14 01:49:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:22 UTC)" (scheduled at 2025-02-14 01:49:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:49:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494192316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:49:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:49:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
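The autopruneworker query above claims at most one task, ordered by last_ran_ms with NULLS FIRST, using FOR UPDATE SKIP LOCKED so concurrent workers never block on the same row. A simplified sketch of that claim pattern with psycopg2 follows; it drops the namespace and last_ran_ms filters from the real query and uses a placeholder DSN.

    import psycopg2

    conn = psycopg2.connect("dbname=quay user=quay host=quayregistry-quay-database port=5432")
    with conn, conn.cursor() as cur:
        # Oldest (or never-run) task first; SKIP LOCKED hands already-claimed rows
        # to no one, so parallel workers each grab a different task or nothing.
        cur.execute(
            """
            SELECT id, namespace_id, last_ran_ms, status
            FROM autoprunetaskstatus
            ORDER BY last_ran_ms ASC NULLS FIRST
            LIMIT 1
            FOR UPDATE SKIP LOCKED
            """
        )
        task = cur.fetchone()
        if task is None:
            # mirrors the log's "no autoprune tasks found, exiting..."
            print("no autoprune tasks found")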
autopruneworker stdout | 2025-02-14 01:49:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:22 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:05.898886+00:00 (in 12.997812 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:52 UTC)" (scheduled at 2025-02-14 01:49:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,910 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:49:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:49:53,115 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:49:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:49:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:59.232325+00:00 (in 5.000715 seconds) securityworker stdout | 2025-02-14 01:49:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:24 UTC)" (scheduled at 2025-02-14 01:49:54.231161+00:00) securityworker stdout | 2025-02-14 01:49:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:49:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:49:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:49:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:49:54,243 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:49:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
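Before each indexing pass the securityworker checks Clair's indexer state: it generates a short-lived JWT and GETs /indexer/api/v1/index_state on the clair-app service. A minimal version of that probe might look like the following; only the URL comes from the log, so the Bearer-token handling and the 'state' field in the response body are assumptions.

# Hedged sketch: probe Clair's index_state endpoint (URL from the log, auth assumed).
import requests

CLAIR_BASE = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

def get_index_state(token):
    # Quay logs "generated jwt for security scanner request"; here the token is
    # treated as an opaque Bearer credential supplied by the caller.
    resp = requests.get(
        CLAIR_BASE + "/indexer/api/v1/index_state",
        headers={"Authorization": "Bearer " + token},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("state")  # response field name assumed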
securityworker stdout | 2025-02-14 01:49:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:49:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:49:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:49:55.392556+00:00 (in 1.001715 seconds) gcworker stdout | 2025-02-14 01:49:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:24 UTC)" (scheduled at 2025-02-14 01:49:54.390410+00:00) gcworker stdout | 2025-02-14 01:49:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:49:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:49:54,981 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} notificationworker stdout | 2025-02-14 01:49:55,341 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:49:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:49:55,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:24.390410+00:00 (in 28.997417 seconds) gcworker stdout | 2025-02-14 01:49:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:25 UTC)" (scheduled at 2025-02-14 01:49:55.392556+00:00) gcworker stdout | 2025-02-14 01:49:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:49:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497495401, None, 1, 0]) gcworker stdout | 2025-02-14 01:49:55,404 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:49:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:49:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:49:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:05.803718+00:00 (in 9.999564 seconds) notificationworker stdout | 2025-02-14 01:49:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:05 UTC)" (scheduled at 2025-02-14 01:49:55.803718+00:00) notificationworker stdout | 2025-02-14 01:49:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:49:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 55, 804421), True, datetime.datetime(2025, 2, 14, 1, 49, 55, 804421), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:49:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:49:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:49:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:49:56,095 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:49:56,574 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:49:56,972 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:49:57,299 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:49:57,631 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:49:57,766 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:49:58,027 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:49:58,316 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:49:58,407 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:49:58,845 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:49:59,007 [243] [DEBUG] [app] Starting request: urn:request:7be640a2-72c0-44a7-bc66-5f6b6045f60f (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:59,008 [242] [DEBUG] [app] Starting request: urn:request:9483d1eb-a284-40a4-9184-d2fedfeff14c (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:49:59,008 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:59,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:59,011 
[243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:59,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:49:59,012 [253] [DEBUG] [app] Starting request: urn:request:1b02644e-7661-46f9-8597-56340d19acce (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:49:59,013 [253] [DEBUG] [app] Ending request: urn:request:1b02644e-7661-46f9-8597-56340d19acce (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:1b02644e-7661-46f9-8597-56340d19acce', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:49:59,013 [252] [DEBUG] [app] Starting request: urn:request:f536f3e8-e150-40d9-9fc6-5d183eb52188 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:49:59,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:49:59,013 [252] [DEBUG] [app] Ending request: urn:request:f536f3e8-e150-40d9-9fc6-5d183eb52188 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:f536f3e8-e150-40d9-9fc6-5d183eb52188', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:59,013 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:49:59,014 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-web stdout | 2025-02-14 01:49:59,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:59,015 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:59,016 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:49:59,018 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 
'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:59,018 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:49:59,018 [245] [DEBUG] [app] Starting request: urn:request:8892a9dd-3c56-4939-b799-09104fdb3721 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:59,019 [245] [DEBUG] [app] Ending request: urn:request:8892a9dd-3c56-4939-b799-09104fdb3721 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8892a9dd-3c56-4939-b799-09104fdb3721', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:49:59,019 [244] [DEBUG] [app] Starting request: urn:request:3d483542-5ba0-4718-b1de-2704159b54e2 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:49:59,019 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:49:59,019 [244] [DEBUG] [app] Ending request: urn:request:3d483542-5ba0-4718-b1de-2704159b54e2 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:3d483542-5ba0-4718-b1de-2704159b54e2', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:49:59,019 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:59,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:49:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:49:59,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:49:59,020 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:59,020 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:59,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:49:59,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:49:59,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
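The py.warnings entries above all come from the same place: the in-pod health probes call https://localhost:8443 over HTTPS without certificate verification, so urllib3 emits InsecureRequestWarning on every request. The sketch below reproduces the condition and shows two common ways to deal with it; the CA-bundle path is a placeholder, not a value from this deployment.

# Reproduces the InsecureRequestWarning logged above, plus two ways to handle it.
import requests
import urllib3

URL = "https://localhost:8443/v1/_internal_ping"  # in-pod endpoint from the log

def probe_insecure():
    # What the probe effectively does today: skip verification, which is
    # exactly what triggers urllib3's InsecureRequestWarning.
    return requests.get(URL, verify=False, timeout=3)

def probe_verified(ca_bundle="/path/to/extra_ca_certs/ca.crt"):  # placeholder path
    # Preferred: verify against the CA that signed the pod's local certificate.
    return requests.get(URL, verify=ca_bundle, timeout=3)

def probe_silenced():
    # Or accept the risk for loopback-only probes and silence just this warning.
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
    return requests.get(URL, verify=False, timeout=3)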
gunicorn-web stdout | 2025-02-14 01:49:59,026 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:49:59,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:59,026 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:49:59,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:59,032 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:49:59,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:59,035 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:49:59,037 [242] [DEBUG] [app] Ending request: urn:request:9483d1eb-a284-40a4-9184-d2fedfeff14c (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:9483d1eb-a284-40a4-9184-d2fedfeff14c', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:59,037 [243] [DEBUG] [app] Ending request: urn:request:7be640a2-72c0-44a7-bc66-5f6b6045f60f (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:7be640a2-72c0-44a7-bc66-5f6b6045f60f', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:49:59,037 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:49:59,037 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:49:59,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:49:59,038 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:49:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:49:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) securityscanningnotificationworker stdout | 2025-02-14 01:49:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:49:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:20.124914+00:00 (in 21.001263 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:49:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:59 UTC)" (scheduled at 2025-02-14 01:49:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:49:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
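The /health/instance handler above validates the database with a deliberately bounded query: it sets a 5000 ms statement_timeout, selects a single teamrole row, then resets the timeout to 0, so a slow or unreachable database cannot hang the kube-probe request. A sketch of the same check, with the connection handling and error handling assumed:

# Sketch of the bounded DB health check seen in the gunicorn-web entries above.
import psycopg2

def database_is_healthy(conn, timeout_ms=5000):
    try:
        with conn.cursor() as cur:
            cur.execute("SET statement_timeout=%s;", (timeout_ms,))
            cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
            cur.fetchone()
            cur.execute("SET statement_timeout=%s;", (0,))
        return True
    except psycopg2.Error:
        return False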
securityscanningnotificationworker stdout | 2025-02-14 01:49:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 49, 59, 123945), True, datetime.datetime(2025, 2, 14, 1, 49, 59, 123945), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:49:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:49:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:49:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:50:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:49:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:49:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:24.231161+00:00 (in 24.998387 seconds) securityworker stdout | 2025-02-14 01:49:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:29 UTC)" (scheduled at 2025-02-14 01:49:59.232325+00:00) securityworker stdout | 2025-02-14 01:49:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:49:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:49:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:49:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,247 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:59,248 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 44, 59, 236352), 1, 2]) securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:49:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:49:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 44, 59, 236352), 1, 2]) securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:49:59,254 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:49:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:49:59,254 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:49:59,622 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:50:01,255 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:50:01,259 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:50:01,262 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:50:01,264 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:50:01,267 [245] [DEBUG] [util.metrics.prometheus] pushed registry to 
pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:50:01,335 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:50:02,148 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:50:02,539 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:50:03,173 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:50:03,176 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:50:03,179 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:50:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:50:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:34.000511+00:00 (in 29.999551 seconds) buildlogsarchiver stdout | 2025-02-14 01:50:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:34 UTC)" (scheduled at 2025-02-14 01:50:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:50:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 50, 4, 1224), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:50:04,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:50:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
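Nearly every worker in this capture reports 'pushed registry to pushgateway at http://localhost:9091' with a grouping key of host, process_name and pid, so each process gets its own metric group on the local Pushgateway sidecar. With prometheus_client such a push looks roughly like the sketch below; the gauge name is invented for illustration, and only the gateway address and grouping-key fields come from the log.

# Hedged sketch of a per-process push to the local Pushgateway (prometheus_client).
import os
import socket
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
heartbeat = Gauge("worker_heartbeat_timestamp", "Last successful push", registry=registry)

def push_metrics(process_name):
    heartbeat.set_to_current_time()
    push_to_gateway(
        "localhost:9091",
        job=process_name,
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),
            "process_name": process_name,
            "pid": str(os.getpid()),
        },
    )

if __name__ == "__main__":
    push_metrics("notificationworker.py")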
buildlogsarchiver stdout | 2025-02-14 01:50:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:50:04,454 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:50:04,458 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:50:04,461 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:50:04,464 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:50:04,466 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:50:04,469 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:50:04,472 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:50:04,526 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:50:04,532 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:50:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:07.807092+00:00 (in 2.002956 seconds) notificationworker stdout | 2025-02-14 01:50:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:15 UTC)" (scheduled at 2025-02-14 01:50:05.803718+00:00) notificationworker stdout | 2025-02-14 01:50:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:50:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 5, 804410), True, datetime.datetime(2025, 2, 14, 1, 50, 5, 804410), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:50:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:50:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:52.900596+00:00 (in 47.001300 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:05 UTC)" (scheduled at 2025-02-14 01:50:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,899 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:50:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:50:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:15.803718+00:00 (in 7.996176 seconds) notificationworker stdout | 2025-02-14 01:50:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:07 UTC)" (scheduled at 2025-02-14 01:50:07.807092+00:00) notificationworker stdout | 2025-02-14 01:50:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
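The manifestsubjectbackfillworker above probes for any manifest whose subject_backfilled (or artifact_type_backfilled) flag is still false or NULL; when nothing comes back it logs that the backfill has completed and skips the pass. The general shape of such a pass is sketched below; the column names come from the logged query, while the batch size, the UPDATE step, and the extract_subject parser are all hypothetical.

# Sketch of a "probe for un-backfilled rows, then stop" pass (update step assumed).
import json

def extract_subject(manifest_bytes):
    # Hypothetical parser: pull the OCI subject digest out of the manifest JSON.
    subject = json.loads(manifest_bytes).get("subject") or {}
    return subject.get("digest")

def backfill_pass(conn, batch_size=100):
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, manifest_bytes FROM manifest "
            "WHERE subject_backfilled = FALSE OR subject_backfilled IS NULL LIMIT %s",
            [batch_size],
        )
        rows = cur.fetchall()
        if not rows:
            return False  # the "backfill worker has completed; skipping" case
        for manifest_id, manifest_bytes in rows:
            cur.execute(
                "UPDATE manifest SET subject = %s, subject_backfilled = TRUE WHERE id = %s",
                [extract_subject(manifest_bytes), manifest_id],
            )
        return True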
notificationworker stdout | 2025-02-14 01:50:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:50:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:50:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:45.503718+00:00 (in 32.997548 seconds) namespacegcworker stdout | 2025-02-14 01:50:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:12 UTC)" (scheduled at 2025-02-14 01:50:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:50:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:50:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:50:14,007 [242] [DEBUG] [app] Starting request: urn:request:5787a755-5ab2-45b5-b32b-48e4cee99c82 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:14,007 [245] [DEBUG] [app] Starting request: urn:request:f72885f2-2359-494c-ad80-bbb3167af4f5 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:14,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:14,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:14,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:50:14,012 [253] [DEBUG] [app] Starting request: urn:request:c501607e-cb8e-41ff-bb60-6d1052ea5db4 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:50:14,012 [253] [DEBUG] [app] Ending request: urn:request:c501607e-cb8e-41ff-bb60-6d1052ea5db4 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:c501607e-cb8e-41ff-bb60-6d1052ea5db4', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:50:14,012 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:50:14,013 [246] [DEBUG] [app] Starting request: urn:request:d8afbc4b-ed0b-4376-b9e2-9548d87e9829 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:50:14,013 [246] [DEBUG] [app] Ending request: urn:request:d8afbc4b-ed0b-4376-b9e2-9548d87e9829 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d8afbc4b-ed0b-4376-b9e2-9548d87e9829', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:50:14,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:14,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:14,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:50:14,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:14,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:14,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:14,017 [242] [DEBUG] [app] Starting request: urn:request:799893d1-205f-4ef3-94a3-754388c77034 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:14,017 [242] [DEBUG] [app] Ending request: urn:request:799893d1-205f-4ef3-94a3-754388c77034 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:799893d1-205f-4ef3-94a3-754388c77034', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:14,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:14,018 [244] [DEBUG] [app] Starting request: urn:request:8a678060-5bf3-4b1a-b67f-a858e28eeb5e (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:14,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:14,018 [244] [DEBUG] [app] Ending request: urn:request:8a678060-5bf3-4b1a-b67f-a858e28eeb5e (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8a678060-5bf3-4b1a-b67f-a858e28eeb5e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:14,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:14,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:14,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:14,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:14,019 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:14,019 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:14,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:50:14,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:14,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:50:14,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:14,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:14,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:14,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:14,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:14,036 [242] [DEBUG] [app] Ending request: urn:request:5787a755-5ab2-45b5-b32b-48e4cee99c82 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:5787a755-5ab2-45b5-b32b-48e4cee99c82', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:14,036 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:50:14,036 [245] [DEBUG] [app] Ending request: urn:request:f72885f2-2359-494c-ad80-bbb3167af4f5 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:f72885f2-2359-494c-ad80-bbb3167af4f5', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:14,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:50:14,036 [245] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) gunicorn-web stdout | 2025-02-14 01:50:14,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) exportactionlogsworker stdout | 2025-02-14 01:50:14,595 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:50:14,709 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:50:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:25.803718+00:00 (in 9.999553 seconds) notificationworker stdout | 2025-02-14 01:50:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:25 UTC)" (scheduled at 2025-02-14 01:50:15.803718+00:00) notificationworker stdout | 2025-02-14 01:50:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:50:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 15, 804371), True, datetime.datetime(2025, 2, 14, 1, 50, 15, 804371), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:50:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:50:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:50:16,772 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:50:20,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:50:20,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:48.125163+00:00 (in 27.999767 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:50:20,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:55:20 UTC)" (scheduled at 2025-02-14 01:50:20.124914+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:50:20,126 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), 'secscanv4/%']) namespacegcworker stdout | 2025-02-14 01:50:20,133 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} securityscanningnotificationworker stdout | 2025-02-14 01:50:20,135 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), True, datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), 0, 'secscanv4/%']) securityscanningnotificationworker stdout | 2025-02-14 01:50:20,138 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), True, datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), 0, 'secscanv4/%', False, datetime.datetime(2025, 2, 14, 1, 50, 20, 125662), 'secscanv4/%']) securityscanningnotificationworker stdout | 2025-02-14 01:50:20,140 [87] [DEBUG] [data.database] Disconnecting from database. 
securityscanningnotificationworker stdout | 2025-02-14 01:50:20,140 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:55:20 UTC)" executed successfully teamsyncworker stdout | 2025-02-14 01:50:20,471 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:50:22,276 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:50:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:50:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:52.310342+00:00 (in 29.999575 seconds) autopruneworker stdout | 2025-02-14 01:50:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:52 UTC)" (scheduled at 2025-02-14 01:50:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:50:22,316 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494222316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:50:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:50:22,320 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:50:22,320 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:50:23,148 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:50:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:50:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:29.232325+00:00 (in 5.000714 seconds) securityworker stdout | 2025-02-14 01:50:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:54 UTC)" (scheduled at 2025-02-14 01:50:24.231161+00:00) securityworker stdout | 2025-02-14 01:50:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:50:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:50:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:50:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:50:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:50:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:50:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:50:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:50:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:25.392556+00:00 (in 1.001722 seconds) gcworker stdout | 2025-02-14 01:50:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:54 UTC)" (scheduled at 2025-02-14 01:50:24.390410+00:00) gcworker stdout | 2025-02-14 01:50:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:50:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:50:24,997 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:50:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:50:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:30.212654+00:00 (in 4.996956 seconds) exportactionlogsworker stdout | 2025-02-14 01:50:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:25 UTC)" (scheduled at 2025-02-14 01:50:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:50:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. 
exportactionlogsworker stdout | 2025-02-14 01:50:25,215 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:50:25,378 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} gcworker stdout | 2025-02-14 01:50:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:50:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:54.390410+00:00 (in 28.997459 seconds) gcworker stdout | 2025-02-14 01:50:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:55 UTC)" (scheduled at 2025-02-14 01:50:25.392556+00:00) gcworker stdout | 2025-02-14 01:50:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:50:25,401 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497525401, None, 1, 0]) gcworker stdout | 2025-02-14 01:50:25,404 [64] [DEBUG] [data.database] Disconnecting from database. gcworker stdout | 2025-02-14 01:50:25,404 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:50:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:35.803718+00:00 (in 9.999533 seconds) notificationworker stdout | 2025-02-14 01:50:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:35 UTC)" (scheduled at 2025-02-14 01:50:25.803718+00:00) notificationworker stdout | 2025-02-14 01:50:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:50:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 25, 804494), True, datetime.datetime(2025, 2, 14, 1, 50, 25, 804494), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:25,814 [75] [DEBUG] [workers.queueworker] No more work. 
notificationworker stdout | 2025-02-14 01:50:25,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:50:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:50:26,131 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:50:26,609 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:50:26,987 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:50:27,314 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:50:27,646 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:50:27,803 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:50:28,051 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:50:28,335 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:50:28,441 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:50:28,861 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:50:29,006 [245] [DEBUG] [app] Starting request: urn:request:b618db58-94ef-411e-8be5-3028273466d4 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:29,007 [244] [DEBUG] [app] Starting request: urn:request:df5324c3-f190-4919-9420-dce5e9894463 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:29,008 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:29,009 [244] [DEBUG] 
[urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:29,010 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:29,011 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:50:29,011 [253] [DEBUG] [app] Starting request: urn:request:6b556fda-745b-4488-b606-8c62b211f161 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:50:29,012 [253] [DEBUG] [app] Ending request: urn:request:6b556fda-745b-4488-b606-8c62b211f161 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:6b556fda-745b-4488-b606-8c62b211f161', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:50:29,012 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:29,012 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:50:29,013 [251] [DEBUG] [app] Starting request: urn:request:2cc7178f-df4c-44b2-8bd3-1c2846a81fb1 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:50:29,014 [251] [DEBUG] [app] Ending request: urn:request:2cc7178f-df4c-44b2-8bd3-1c2846a81fb1 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:2cc7178f-df4c-44b2-8bd3-1c2846a81fb1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.003 162 0.003) gunicorn-registry stdout | 2025-02-14 01:50:29,015 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:29,015 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:29,016 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:29,016 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:29,018 [244] [WARNING] [py.warnings] 
/app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:29,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:29,019 [242] [DEBUG] [app] Starting request: urn:request:2d833816-f7d8-4d89-8601-f2d06bbc0cf7 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:29,019 [243] [DEBUG] [app] Starting request: urn:request:0a15124e-84f4-406f-806b-5a20c4d283ea (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:29,019 [242] [DEBUG] [app] Ending request: urn:request:2d833816-f7d8-4d89-8601-f2d06bbc0cf7 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:2d833816-f7d8-4d89-8601-f2d06bbc0cf7', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:29,019 [243] [DEBUG] [app] Ending request: urn:request:0a15124e-84f4-406f-806b-5a20c4d283ea (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:0a15124e-84f4-406f-806b-5a20c4d283ea', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:29,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:29,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:29,019 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:29,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:29,020 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:29,020 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:29,020 [245] [DEBUG] [data.model.health] Validating database connection. 
gunicorn-web stdout | 2025-02-14 01:50:29,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:29,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:50:29,025 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:29,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:50:29,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:29,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:29,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:29,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:29,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:29,037 [244] [DEBUG] [app] Ending request: urn:request:df5324c3-f190-4919-9420-dce5e9894463 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:df5324c3-f190-4919-9420-dce5e9894463', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:29,037 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:50:29,037 [245] [DEBUG] [app] Ending request: urn:request:b618db58-94ef-411e-8be5-3028273466d4 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b618db58-94ef-411e-8be5-3028273466d4', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:29,037 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:50:29,037 [245] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) gunicorn-web stdout | 2025-02-14 01:50:29,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.033 47 0.032) securityworker stdout | 2025-02-14 01:50:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:50:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:54.231161+00:00 (in 24.998390 seconds) securityworker stdout | 2025-02-14 01:50:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:59 UTC)" (scheduled at 2025-02-14 01:50:29.232325+00:00) securityworker stdout | 2025-02-14 01:50:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:50:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:50:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:50:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:29,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:29,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:29,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:29,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 
1-2 by worker securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 45, 29, 236481), 1, 2]) securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] 
[util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:50:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 45, 29, 236481), 1, 2]) securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 
01:50:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:50:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:50:29,635 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:50:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:50:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:25.215238+00:00 (in 55.002137 seconds) exportactionlogsworker stdout | 2025-02-14 01:50:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:30 UTC)" (scheduled at 2025-02-14 01:50:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:50:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:50:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 30, 213405), True, datetime.datetime(2025, 2, 14, 1, 50, 30, 213405), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:50:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:50:30,223 [63] [DEBUG] [data.database] Disconnecting from database. 
exportactionlogsworker stdout | 2025-02-14 01:50:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:50:31,263 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:50:31,267 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:50:31,269 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:50:31,272 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:50:31,275 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:50:31,350 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:50:32,179 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:50:32,562 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:50:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:50:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:36.014770+00:00 (in 3.002685 seconds) repositorygcworker stdout | 2025-02-14 01:50:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:33 UTC)" (scheduled at 2025-02-14 01:50:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:50:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:50:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 33, 12371), True, datetime.datetime(2025, 2, 14, 1, 50, 33, 12371), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:50:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:50:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:50:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:50:33,181 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:50:33,185 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:50:33,187 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:50:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:50:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:04.000511+00:00 (in 29.999568 seconds) buildlogsarchiver stdout | 2025-02-14 01:50:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:04 UTC)" (scheduled at 2025-02-14 01:50:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:50:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 50, 34, 1203), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:50:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:50:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
buildlogsarchiver stdout | 2025-02-14 01:50:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:50:34,464 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:50:34,466 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:50:34,469 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:50:34,474 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:50:34,476 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:50:34,479 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:50:34,481 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:50:34,536 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:50:34,540 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:50:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:45.803718+00:00 (in 9.999559 seconds) notificationworker stdout | 2025-02-14 01:50:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:45 UTC)" (scheduled at 2025-02-14 01:50:35.803718+00:00) notificationworker stdout | 2025-02-14 01:50:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:50:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 35, 804357), True, datetime.datetime(2025, 2, 14, 1, 50, 35, 804357), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:50:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:50:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:50:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:50:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:33.011632+00:00 (in 56.996374 seconds) repositorygcworker stdout | 2025-02-14 01:50:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:36 UTC)" (scheduled at 2025-02-14 01:50:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:50:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:50:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:50:44,007 [245] [DEBUG] [app] Starting request: urn:request:47037912-8bb4-4b54-afa5-11728dadef89 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:44,007 [242] [DEBUG] [app] Starting request: urn:request:e6329ed0-8171-46f0-a507-1ae06a570b84 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:44,008 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:44,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:44,010 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:50:44,012 [246] [DEBUG] [app] Starting request: urn:request:48b7127e-e438-4ae7-8b3a-4cc81bb6c01d (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:50:44,012 [246] [DEBUG] [app] Ending request: urn:request:48b7127e-e438-4ae7-8b3a-4cc81bb6c01d (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:48b7127e-e438-4ae7-8b3a-4cc81bb6c01d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:50:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:50:44,013 [250] [DEBUG] [app] Starting request: urn:request:cc6ee473-7bdc-4b63-83c7-21863ab9764c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:44,014 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-registry stdout | 2025-02-14 01:50:44,014 [250] [DEBUG] [app] Ending request: urn:request:cc6ee473-7bdc-4b63-83c7-21863ab9764c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:cc6ee473-7bdc-4b63-83c7-21863ab9764c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:50:44,015 [250] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.004 162 0.004) gunicorn-web stdout | 2025-02-14 01:50:44,016 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:44,016 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:44,017 [243] [DEBUG] [app] Starting request: urn:request:73e547f0-4a96-4984-a424-2114e7cf2162 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:44,017 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:44,017 [243] [DEBUG] [app] Ending request: urn:request:73e547f0-4a96-4984-a424-2114e7cf2162 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:73e547f0-4a96-4984-a424-2114e7cf2162', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:44,018 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:44,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:44,018 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:44,018 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:44,019 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:44,020 [243] [DEBUG] [app] Starting request: urn:request:47a276b7-e985-447a-839a-2943f0f40bec (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:44,020 [243] [DEBUG] [app] Ending request: urn:request:47a276b7-e985-447a-839a-2943f0f40bec (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:47a276b7-e985-447a-839a-2943f0f40bec', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:44,020 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.000 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:44,021 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:44,021 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:44,021 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:44,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:50:44,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:44,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:50:44,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:44,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:44,033 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:44,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:44,036 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:44,037 [245] [DEBUG] [app] Ending request: urn:request:47037912-8bb4-4b54-afa5-11728dadef89 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:47037912-8bb4-4b54-afa5-11728dadef89', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:44,037 [245] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) gunicorn-web stdout | 2025-02-14 01:50:44,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:50:44,038 [242] [DEBUG] [app] Ending request: urn:request:e6329ed0-8171-46f0-a507-1ae06a570b84 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:e6329ed0-8171-46f0-a507-1ae06a570b84', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:44,038 [242] [DEBUG] [data.database] Disconnecting from database. 
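The /health/instance probes above fan out to the registry and web _internal_ping endpoints over localhost and then validate the database by selecting a single teamrole row under a 5000 ms statement timeout. Below is a minimal sketch of that database portion only, assuming direct psycopg2 access with a placeholder DSN; the real check goes through Quay's data.model.health and peewee rather than raw SQL.

    import psycopg2

    # Placeholder DSN; in the pod this resolves to the configured PostgreSQL service.
    DSN = "postgresql://quay:CHANGEME@db-host:5432/quay"

    def instance_db_health(dsn=DSN, timeout_ms=5000):
        """Mirror the teamrole existence check logged by gunicorn-web above."""
        conn = psycopg2.connect(dsn)
        try:
            with conn.cursor() as cur:
                # Bound the query the same way the log shows: SET statement_timeout=5000;
                cur.execute("SET statement_timeout=%s;", (timeout_ms,))
                cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', (1,))
                row = cur.fetchone()
                # Reset the timeout afterwards, as the worker does (SET statement_timeout=0).
                cur.execute("SET statement_timeout=%s;", (0,))
                return row is not None
        finally:
            conn.close()

The 200 responses recorded by nginx and gunicorn.access above line up with the internal pings and this query all completing within the timeout.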
gunicorn-web stdout | 2025-02-14 01:50:44,039 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.033 47 0.032) exportactionlogsworker stdout | 2025-02-14 01:50:44,625 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:50:44,745 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:50:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:50:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:12.505687+00:00 (in 27.001461 seconds) namespacegcworker stdout | 2025-02-14 01:50:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:45 UTC)" (scheduled at 2025-02-14 01:50:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:50:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:50:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 45, 504557), True, datetime.datetime(2025, 2, 14, 1, 50, 45, 504557), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:50:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:50:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:50:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:50:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:55.803718+00:00 (in 9.999554 seconds) notificationworker stdout | 2025-02-14 01:50:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:55 UTC)" (scheduled at 2025-02-14 01:50:45.803718+00:00) notificationworker stdout | 2025-02-14 01:50:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:50:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 45, 804363), True, datetime.datetime(2025, 2, 14, 1, 50, 45, 804363), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:50:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:50:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:50:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:50:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:50:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:46.009738+00:00 (in 59.999538 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:50:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:46 UTC)" (scheduled at 2025-02-14 01:50:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:50:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:50:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:50:46,019 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:50:46,795 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:50:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:50:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:59.123196+00:00 (in 10.997535 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:50:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:48 UTC)" (scheduled at 2025-02-14 01:50:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:50:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
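Every QueueWorker.poll_queue run in this log issues the same shape of query: gather up to 50 eligible queueitem rows whose queue_name matches a prefix (notification/%, namespacegc/%, secscanv4/%), then take one at random. A reduced sketch of that selection, assuming an open psycopg2 cursor, reusing the 50/1 limits shown above, and abbreviating the logged column list to "t1".*:

    from datetime import datetime

    POLL_SQL = """
    SELECT "t1".* FROM "queueitem" AS "t1"
    INNER JOIN (
        SELECT "t1"."id" FROM "queueitem" AS "t1"
        WHERE "t1"."available_after" <= %s
          AND ("t1"."available" = %s OR "t1"."processing_expires" <= %s)
          AND "t1"."retries_remaining" > %s
          AND "t1"."queue_name" ILIKE %s
        LIMIT %s
    ) AS "j1" ON ("t1"."id" = "j1"."id")
    ORDER BY Random()
    LIMIT %s OFFSET %s
    """

    def poll_queue_item(cur, prefix="notification/%", batch=50):
        """Fetch one random eligible work item for a queue-name prefix, or None."""
        now = datetime.utcnow()
        cur.execute(POLL_SQL, (now, True, now, 0, prefix, batch, 1, 0))
        return cur.fetchone()

An empty result is what produces the "No more work." / "Disconnecting from database." pairs that follow each poll.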
securityscanningnotificationworker stdout | 2025-02-14 01:50:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:50:50,169 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:50:50,507 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} expiredappspecifictokenworker stdout | 2025-02-14 01:50:52,302 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} autopruneworker stdout | 2025-02-14 01:50:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:50:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:22.310342+00:00 (in 29.999602 seconds) autopruneworker stdout | 2025-02-14 01:50:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:22 UTC)" (scheduled at 2025-02-14 01:50:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:50:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494252316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:50:52,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:50:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
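The autopruneworker claim above relies on ORDER BY "last_ran_ms" ASC NULLS FIRST ... FOR UPDATE SKIP LOCKED, so multiple worker replicas could poll autoprunetaskstatus concurrently and each lock a different task instead of blocking on one another; here nothing qualifies, hence "no autoprune tasks found, exiting...". A stripped-down sketch of that claim query, assuming an open psycopg2 connection inside a transaction and omitting the enabled-namespace subquery from the logged SQL:

    CLAIM_SQL = """
    SELECT "id", "namespace_id", "last_ran_ms", "status"
    FROM "autoprunetaskstatus"
    WHERE ("last_ran_ms" < %s OR "last_ran_ms" IS NULL)
    ORDER BY "last_ran_ms" ASC NULLS FIRST
    LIMIT 1
    FOR UPDATE SKIP LOCKED
    """

    def claim_autoprune_task(conn, cutoff_ms):
        """Lock the stalest runnable task; None reproduces the "no autoprune tasks found" branch."""
        with conn.cursor() as cur:
            cur.execute(CLAIM_SQL, (cutoff_ms,))
            return cur.fetchone()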
autopruneworker stdout | 2025-02-14 01:50:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:22 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:05.898886+00:00 (in 12.997835 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:52 UTC)" (scheduled at 2025-02-14 01:50:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,910 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:50:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:50:53,184 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:50:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:50:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:59.232325+00:00 (in 5.000678 seconds) securityworker stdout | 2025-02-14 01:50:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:24 UTC)" (scheduled at 2025-02-14 01:50:54.231161+00:00) securityworker stdout | 2025-02-14 01:50:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:50:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:50:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:50:54,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:50:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:50:54,247 [88] [DEBUG] [data.database] Disconnecting from database. 
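Before indexing anything, the securityworker above signs a short-lived JWT and asks Clair for its indexer state at /indexer/api/v1/index_state; only then does it look at the manifest id range. A hedged sketch of that request using PyJWT and requests; the pre-shared key and the exact claim set are assumptions, since the log only shows that a JWT was generated:

    import time
    import jwt        # PyJWT
    import requests

    CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
    PSK = b"decoded-pre-shared-key"   # placeholder; shared between the Quay and Clair configs

    def clair_index_state():
        # Assumed claims: an issuer plus a short expiry; the real token contents may differ.
        token = jwt.encode({"iss": "quay", "exp": int(time.time()) + 300}, PSK, algorithm="HS256")
        resp = requests.get(
            f"{CLAIR}/indexer/api/v1/index_state",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()   # the indexer state Quay compares against stored indexer_hash values

The 200 logged by urllib3 above is this same call succeeding against the in-cluster Clair service.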
securityworker stdout | 2025-02-14 01:50:54,247 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:50:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:50:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:50:55.392556+00:00 (in 1.001741 seconds) gcworker stdout | 2025-02-14 01:50:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:24 UTC)" (scheduled at 2025-02-14 01:50:54.390410+00:00) gcworker stdout | 2025-02-14 01:50:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:50:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:50:55,033 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:50:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:50:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:24.390410+00:00 (in 28.997444 seconds) gcworker stdout | 2025-02-14 01:50:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:25 UTC)" (scheduled at 2025-02-14 01:50:55.392556+00:00) gcworker stdout | 2025-02-14 01:50:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:50:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497555401, None, 1, 0]) gcworker stdout | 2025-02-14 01:50:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:50:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:50:55,410 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:50:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:50:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:05.803718+00:00 (in 9.999548 seconds) notificationworker stdout | 2025-02-14 01:50:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:05 UTC)" (scheduled at 2025-02-14 01:50:55.803718+00:00) notificationworker stdout | 2025-02-14 01:50:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:50:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 55, 804385), True, datetime.datetime(2025, 2, 14, 1, 50, 55, 804385), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:50:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:50:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:50:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:50:56,167 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:50:56,645 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:50:57,018 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:50:57,346 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:50:57,683 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:50:57,817 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:50:58,074 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:50:58,351 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:50:58,470 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:50:58,897 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:50:59,007 [242] [DEBUG] [app] Starting request: urn:request:b81899ba-286d-4d27-8783-e1dce3a13a45 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:59,008 [243] [DEBUG] [app] Starting request: urn:request:d3de4f34-8a9e-4f93-8dc8-1eb8e84798ac (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:50:59,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:59,009 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:59,010 
[242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:59,012 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:50:59,012 [246] [DEBUG] [app] Starting request: urn:request:90b37105-8979-4b02-a67b-db4a9e8d5a07 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:50:59,012 [246] [DEBUG] [app] Ending request: urn:request:90b37105-8979-4b02-a67b-db4a9e8d5a07 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:90b37105-8979-4b02-a67b-db4a9e8d5a07', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:50:59,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:50:59,013 [253] [DEBUG] [app] Starting request: urn:request:d34fac40-db33-43bd-9db3-927f25fd068b (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:59,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:50:59,013 [253] [DEBUG] [app] Ending request: urn:request:d34fac40-db33-43bd-9db3-927f25fd068b (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d34fac40-db33-43bd-9db3-927f25fd068b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:50:59,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:59,013 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:50:59,014 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:59,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:50:59,016 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 
'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:59,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:50:59,017 [245] [DEBUG] [app] Starting request: urn:request:339214c7-e7e1-411f-bd11-3666b1abacf1 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:50:59,018 [245] [DEBUG] [app] Ending request: urn:request:339214c7-e7e1-411f-bd11-3666b1abacf1 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:339214c7-e7e1-411f-bd11-3666b1abacf1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:59,018 [244] [DEBUG] [app] Starting request: urn:request:8cef5430-4e9f-4abc-afbb-406233d980fa (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:50:59,018 [244] [DEBUG] [app] Ending request: urn:request:8cef5430-4e9f-4abc-afbb-406233d980fa (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8cef5430-4e9f-4abc-afbb-406233d980fa', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:50:59,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:59,018 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:59,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:50:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:50:59,019 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:59,019 [243] [INFO] [data.database] Connection pooling disabled for postgresql nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:50:59,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:50:59,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:50:59,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:50:59,024 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:50:59,024 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:59,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:50:59,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:50:59,031 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:59,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:50:59,034 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:59,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:50:59,036 [243] [DEBUG] [app] Ending request: urn:request:d3de4f34-8a9e-4f93-8dc8-1eb8e84798ac (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:d3de4f34-8a9e-4f93-8dc8-1eb8e84798ac', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:59,036 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:50:59,036 [242] [DEBUG] [app] Ending request: urn:request:b81899ba-286d-4d27-8783-e1dce3a13a45 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b81899ba-286d-4d27-8783-e1dce3a13a45', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:50:59,036 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:50:59,036 [242] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.029) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:50:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) gunicorn-web stdout | 2025-02-14 01:50:59,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:50:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" securityscanningnotificationworker stdout | 2025-02-14 01:50:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:50:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:48.125163+00:00 (in 49.001563 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:50:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:59 UTC)" (scheduled at 2025-02-14 01:50:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:50:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
securityscanningnotificationworker stdout | 2025-02-14 01:50:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 50, 59, 123863), True, datetime.datetime(2025, 2, 14, 1, 50, 59, 123863), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:50:59,132 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:50:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:50:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:51:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:50:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:50:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:24.231161+00:00 (in 24.998309 seconds) securityworker stdout | 2025-02-14 01:50:59,233 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:29 UTC)" (scheduled at 2025-02-14 01:50:59.232325+00:00) securityworker stdout | 2025-02-14 01:50:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:50:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:50:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:50:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 45, 59, 236643), 1, 2]) securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:50:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:50:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 45, 59, 236643), 1, 2]) securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:50:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:50:59,255 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:50:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:50:59,671 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:51:01,271 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:51:01,274 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:51:01,277 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:51:01,280 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:51:01,283 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:51:01,385 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:51:02,215 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:51:02,599 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:51:03,189 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:51:03,192 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:51:03,194 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:51:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for 
jobs to run buildlogsarchiver stdout | 2025-02-14 01:51:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:34.000511+00:00 (in 29.999565 seconds) buildlogsarchiver stdout | 2025-02-14 01:51:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:34 UTC)" (scheduled at 2025-02-14 01:51:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:51:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 51, 4, 1207), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:51:04,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:51:04,010 [59] [DEBUG] [data.database] Disconnecting from database. buildlogsarchiver stdout | 2025-02-14 01:51:04,010 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:51:04,474 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:51:04,478 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:51:04,482 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:51:04,484 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:51:04,487 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:51:04,489 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:51:04,493 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:51:04,544 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:51:04,548 [251] [DEBUG] 
[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:51:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:51:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:07.807092+00:00 (in 2.002897 seconds) notificationworker stdout | 2025-02-14 01:51:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:15 UTC)" (scheduled at 2025-02-14 01:51:05.803718+00:00) notificationworker stdout | 2025-02-14 01:51:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:51:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 5, 804414), True, datetime.datetime(2025, 2, 14, 1, 51, 5, 804414), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:51:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:51:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:51:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:52.900596+00:00 (in 47.001216 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:05 UTC)" (scheduled at 2025-02-14 01:51:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,908 [71] [DEBUG] [data.database] Disconnecting from database. 
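Each worker process in this pod also reports its own metrics, which is what the recurring "pushed registry to pushgateway at http://localhost:9091" lines are, grouped by host, process_name and pid. A minimal sketch of that reporting pattern with prometheus_client against the same local pushgateway; the gauge and the job label here are made-up examples, since the log does not show which metrics Quay actually pushes:

    import os
    import socket

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    # Example metric only; Quay registers its own collectors elsewhere.
    heartbeat = Gauge("worker_heartbeat_timestamp", "Last successful metrics push", registry=registry)

    def push_worker_metrics(process_name):
        heartbeat.set_to_current_time()
        push_to_gateway(
            "localhost:9091",
            job="quay",                      # assumed job label; the grouping key mirrors the log fields
            registry=registry,
            grouping_key={
                "host": socket.gethostname(),
                "process_name": process_name,
                "pid": str(os.getpid()),
            },
        )

    push_worker_metrics("notificationworker.py")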
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:51:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:51:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:15.803718+00:00 (in 7.996150 seconds) notificationworker stdout | 2025-02-14 01:51:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:07 UTC)" (scheduled at 2025-02-14 01:51:07.807092+00:00) notificationworker stdout | 2025-02-14 01:51:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. notificationworker stdout | 2025-02-14 01:51:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:51:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:51:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:45.503718+00:00 (in 32.997552 seconds) namespacegcworker stdout | 2025-02-14 01:51:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:12 UTC)" (scheduled at 2025-02-14 01:51:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:51:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:51:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:51:14,007 [243] [DEBUG] [app] Starting request: urn:request:e27e37bf-5aad-4f51-85b2-fe7a5ec854c8 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:51:14,008 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:14,008 [242] [DEBUG] [app] Starting request: urn:request:a7bbb5be-2275-43a4-920e-5e5f9e9c3c41 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:51:14,010 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:14,010 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:51:14,011 [246] [DEBUG] [app] Starting request: urn:request:44bb6c33-1574-4bd6-a7da-7cc481a07a96 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:51:14,012 [246] [DEBUG] [app] Ending request: urn:request:44bb6c33-1574-4bd6-a7da-7cc481a07a96 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:44bb6c33-1574-4bd6-a7da-7cc481a07a96', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:51:14,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:51:14,012 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:51:14,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:51:14,013 [246] [DEBUG] [app] Starting request: urn:request:4f49b123-e386-48fc-95e4-177e284bdfe6 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:51:14,013 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-registry stdout | 2025-02-14 01:51:14,013 [246] [DEBUG] [app] Ending request: urn:request:4f49b123-e386-48fc-95e4-177e284bdfe6 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:4f49b123-e386-48fc-95e4-177e284bdfe6', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:51:14,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-web stdout | 2025-02-14 01:51:14,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:14,015 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:51:14,016 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:14,016 [244] [DEBUG] [app] Starting request: urn:request:71ede706-40f0-41e3-abe4-c0015a81845d (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:51:14,017 [244] [DEBUG] [app] Ending request: urn:request:71ede706-40f0-41e3-abe4-c0015a81845d (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:71ede706-40f0-41e3-abe4-c0015a81845d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:51:14,017 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:51:14,018 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:14,018 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:51:14,018 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:51:14,018 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:51:14,019 [245] [DEBUG] [app] Starting request: urn:request:0dcad595-2e4b-46fe-bb4f-5552509e080b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:51:14,019 [245] [DEBUG] [app] Ending request: urn:request:0dcad595-2e4b-46fe-bb4f-5552509e080b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:0dcad595-2e4b-46fe-bb4f-5552509e080b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:51:14,020 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:51:14,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:14,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:51:14,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:51:14,024 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:51:14,024 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:51:14,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:51:14,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:51:14,030 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:51:14,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:51:14,033 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:51:14,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:51:14,035 [243] [DEBUG] [app] Ending request: urn:request:e27e37bf-5aad-4f51-85b2-fe7a5ec854c8 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:e27e37bf-5aad-4f51-85b2-fe7a5ec854c8', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:51:14,036 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:51:14,036 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:51:14,037 [242] [DEBUG] [app] Ending request: urn:request:a7bbb5be-2275-43a4-920e-5e5f9e9c3c41 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:a7bbb5be-2275-43a4-920e-5e5f9e9c3c41', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:51:14,037 [242] [DEBUG] [data.database] Disconnecting from database. 
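The /health/instance handling above validates the database by checking that at least one teamrole row exists, wrapping the probe in a 5-second statement_timeout that is reset to 0 afterwards. A minimal sketch of that check on a psycopg2-style DB-API connection (the real path goes through data.model.health and peewee):

    # Sketch of the team-role health probe seen in the peewee debug lines:
    # bound the query with statement_timeout, fetch one row, then lift the limit.
    def teamrole_health_check(conn, timeout_ms=5000):
        with conn.cursor() as cur:
            cur.execute("SET statement_timeout=%s;", (timeout_ms,))
            cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', (1,))
            row = cur.fetchone()
            cur.execute("SET statement_timeout=%s;", (0,))  # reset, as in the log
        return row is not None  # healthy only if a team role exists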
gunicorn-web stdout | 2025-02-14 01:51:14,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.030) exportactionlogsworker stdout | 2025-02-14 01:51:14,644 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:51:14,768 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:51:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:51:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:25.803718+00:00 (in 9.999555 seconds) notificationworker stdout | 2025-02-14 01:51:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:25 UTC)" (scheduled at 2025-02-14 01:51:15.803718+00:00) notificationworker stdout | 2025-02-14 01:51:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:51:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 15, 804372), True, datetime.datetime(2025, 2, 14, 1, 51, 15, 804372), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:51:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:51:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
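Each long-running process above periodically pushes its metrics registry to the local Prometheus pushgateway on port 9091, grouped by pod host, process name, and pid. A minimal sketch of an equivalent push with the prometheus_client library; the metric and the job name are invented for illustration and this is not Quay's own metrics code:

    # Sketch: push a registry to the local pushgateway with the same grouping-key
    # shape seen in the util.metrics.prometheus debug lines.
    import os
    import socket
    from prometheus_client import CollectorRegistry, Counter, push_to_gateway

    registry = CollectorRegistry()
    jobs_done = Counter("worker_jobs_done_total", "Jobs completed", registry=registry)  # hypothetical metric
    jobs_done.inc()

    push_to_gateway(
        "localhost:9091",
        job="quay-worker",  # job name assumed
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),          # the pod name in these logs
            "process_name": "notificationworker.py",
            "pid": str(os.getpid()),
        },
    )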
notificationworker stdout | 2025-02-14 01:51:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:51:16,814 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:51:20,195 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:51:20,544 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:51:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:51:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:52.310342+00:00 (in 29.999575 seconds) autopruneworker stdout | 2025-02-14 01:51:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:52 UTC)" (scheduled at 2025-02-14 01:51:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:51:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494282316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:51:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:51:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:51:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:51:22,338 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:51:23,220 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:51:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:51:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:29.232325+00:00 (in 5.000634 seconds) securityworker stdout | 2025-02-14 01:51:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:54 UTC)" (scheduled at 2025-02-14 01:51:24.231161+00:00) securityworker stdout | 2025-02-14 01:51:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:51:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:51:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:51:24,284 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:51:24,293 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:51:24,295 [88] [DEBUG] [data.database] Disconnecting from database. 
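The autopruneworker query above claims at most one autoprunetaskstatus row, least recently run first, using FOR UPDATE SKIP LOCKED so concurrent workers never contend for the same task and simply exit when nothing is claimable. A minimal sketch of that claim pattern; the namespace-enabled filter from the logged query is omitted for brevity, and the pruning step is a placeholder:

    # Sketch: claim one autoprune task without blocking other workers,
    # mirroring the FOR UPDATE SKIP LOCKED query in the autopruneworker log.
    CLAIM_SQL = """
    SELECT id, namespace_id, last_ran_ms, status
    FROM autoprunetaskstatus
    ORDER BY last_ran_ms ASC NULLS FIRST
    LIMIT 1
    FOR UPDATE SKIP LOCKED
    """

    def claim_one_task(conn):
        with conn:  # one transaction per claim; the row lock is held until commit
            with conn.cursor() as cur:
                cur.execute(CLAIM_SQL)
                task = cur.fetchone()
                if task is None:
                    return None  # matches "no autoprune tasks found, exiting..."
                # ... prune the namespace here, then update last_ran_ms ...
                return task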
securityworker stdout | 2025-02-14 01:51:24,295 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:51:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:51:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:25.392556+00:00 (in 1.001732 seconds) gcworker stdout | 2025-02-14 01:51:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:54 UTC)" (scheduled at 2025-02-14 01:51:24.390410+00:00) gcworker stdout | 2025-02-14 01:51:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:51:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:51:25,051 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:51:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:51:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:30.212654+00:00 (in 4.996957 seconds) exportactionlogsworker stdout | 2025-02-14 01:51:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:25 UTC)" (scheduled at 2025-02-14 01:51:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:51:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:51:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:25 UTC)" executed successfully gcworker stdout | 2025-02-14 01:51:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:51:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:54.390410+00:00 (in 28.997430 seconds) gcworker stdout | 2025-02-14 01:51:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:55 UTC)" (scheduled at 2025-02-14 01:51:25.392556+00:00) gcworker stdout | 2025-02-14 01:51:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:51:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497585401, None, 1, 0]) gcworker stdout | 2025-02-14 01:51:25,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:51:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:51:25,431 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:51:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:51:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:35.803718+00:00 (in 9.999516 seconds) notificationworker stdout | 2025-02-14 01:51:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:35 UTC)" (scheduled at 2025-02-14 01:51:25.803718+00:00) notificationworker stdout | 2025-02-14 01:51:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:51:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 25, 804590), True, datetime.datetime(2025, 2, 14, 1, 51, 25, 804590), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:51:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:51:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:51:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:51:26,186 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:51:26,678 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:51:27,031 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:51:27,360 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:51:27,695 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:51:27,838 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:51:28,110 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:51:28,372 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:51:28,489 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:51:28,933 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:51:29,007 [242] [DEBUG] [app] Starting request: urn:request:8732b925-0c17-4c44-b651-a523d09494fc (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:51:29,008 [244] [DEBUG] [app] Starting request: urn:request:902c204e-8c0f-49ce-81b5-7afeb0adde18 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:51:29,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:29,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:29,011 
[242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:51:29,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:51:29,012 [246] [DEBUG] [app] Starting request: urn:request:ddd4b6a1-2bb8-4402-b3fa-3776316e372d (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:51:29,013 [246] [DEBUG] [app] Ending request: urn:request:ddd4b6a1-2bb8-4402-b3fa-3776316e372d (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:ddd4b6a1-2bb8-4402-b3fa-3776316e372d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:51:29,013 [252] [DEBUG] [app] Starting request: urn:request:80677b6e-9c6e-4802-8f2f-319e25a4649f (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:51:29,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:51:29,013 [252] [DEBUG] [app] Ending request: urn:request:80677b6e-9c6e-4802-8f2f-319e25a4649f (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:80677b6e-9c6e-4802-8f2f-319e25a4649f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:51:29,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:51:29,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:51:29,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:29,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:29,016 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:51:29,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 
'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:51:29,018 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:51:29,018 [245] [DEBUG] [app] Starting request: urn:request:34dbdf3a-077d-42f0-8cfb-e07c4cd3cb55 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:51:29,018 [245] [DEBUG] [app] Ending request: urn:request:34dbdf3a-077d-42f0-8cfb-e07c4cd3cb55 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:34dbdf3a-077d-42f0-8cfb-e07c4cd3cb55', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:51:29,019 [243] [DEBUG] [app] Starting request: urn:request:5883aabc-c057-4b5d-a395-6225d95ed6e0 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:51:29,019 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:51:29,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:29,019 [243] [DEBUG] [app] Ending request: urn:request:5883aabc-c057-4b5d-a395-6225d95ed6e0 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:5883aabc-c057-4b5d-a395-6225d95ed6e0', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:51:29,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:51:29,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:51:29,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:51:29,019 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:51:29,020 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:51:29,020 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:51:29,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
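The recurring InsecureRequestWarning above is urllib3 objecting to the health checks the web workers make against their own endpoints at https://localhost:8443 without certificate verification. A minimal sketch of what produces the warning and how passing a CA bundle would silence it; the bundle path is an assumption, and the calls only make sense from inside the pod:

    # Sketch: verify=False is what triggers the InsecureRequestWarning in the logs.
    import requests

    # Unverified, as the internal health checks do -> emits InsecureRequestWarning.
    requests.get("https://localhost:8443/v1/_internal_ping", verify=False)

    # Verifying against a CA bundle (path assumed) makes the same call warning-free.
    requests.get("https://localhost:8443/v1/_internal_ping",
                 verify="/path/to/internal-ca.pem")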
gunicorn-web stdout | 2025-02-14 01:51:29,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:51:29,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:51:29,025 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:51:29,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:51:29,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:51:29,035 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:51:29,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:51:29,037 [244] [DEBUG] [app] Ending request: urn:request:902c204e-8c0f-49ce-81b5-7afeb0adde18 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:902c204e-8c0f-49ce-81b5-7afeb0adde18', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:51:29,037 [242] [DEBUG] [app] Ending request: urn:request:8732b925-0c17-4c44-b651-a523d09494fc (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:8732b925-0c17-4c44-b651-a523d09494fc', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:51:29,037 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:51:29,037 [244] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:51:29,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:51:29,038 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.030) securityworker stdout | 2025-02-14 01:51:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:51:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:54.231161+00:00 (in 24.998303 seconds) securityworker stdout | 2025-02-14 01:51:29,233 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:59 UTC)" (scheduled at 2025-02-14 01:51:29.232325+00:00) securityworker stdout | 2025-02-14 01:51:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:51:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:51:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:51:29,237 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:29,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:29,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:29,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:29,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No 
candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 46, 29, 236833), 1, 2]) securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:29,253 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 46, 29, 236833), 1, 2]) securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range 
completed: 1-2 securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:51:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stdout | 2025-02-14 01:51:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:51:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:51:29,679 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:51:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:51:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:25.215238+00:00 (in 55.002160 seconds) exportactionlogsworker stdout | 2025-02-14 01:51:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:30 UTC)" (scheduled at 2025-02-14 01:51:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:51:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:51:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 30, 213404), True, datetime.datetime(2025, 2, 14, 1, 51, 30, 213404), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:51:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:51:30,223 [63] [DEBUG] [data.database] Disconnecting from database. 
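The securityworker passes above walk the manifest id range (just 1-2 here) in three sweeps: manifests with no manifestsecuritystatus row at all, manifests whose last index attempt failed, and manifests indexed under an older indexer hash; finding no candidates, the allocator marks each block completed and reports no more work. A minimal sketch of the first sweep as plain SQL driven from Python; it mirrors the LEFT OUTER JOIN in the peewee debug line but is not the worker's own code:

    # Sketch: manifests in [lower, upper) that have no Clair index result yet,
    # i.e. no matching manifestsecuritystatus row.
    UNINDEXED_SQL = """
    SELECT m.id, m.digest
    FROM manifest AS m
    LEFT OUTER JOIN manifestsecuritystatus AS mss ON mss.manifest_id = m.id
    WHERE mss.id IS NULL
      AND m.id >= %s
      AND m.id < %s
    ORDER BY m.id
    """

    def unindexed_manifests(conn, lower, upper):
        with conn.cursor() as cur:
            cur.execute(UNINDEXED_SQL, (lower, upper))
            return cur.fetchall()  # empty here, so block 1-2 is marked completed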
exportactionlogsworker stdout | 2025-02-14 01:51:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:51:31,281 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:51:31,284 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:51:31,287 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:51:31,289 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:51:31,292 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:51:31,410 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:51:32,231 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:51:32,634 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:51:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:51:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:36.014770+00:00 (in 3.002667 seconds) repositorygcworker stdout | 2025-02-14 01:51:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:33 UTC)" (scheduled at 2025-02-14 01:51:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:51:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:51:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 33, 12376), True, datetime.datetime(2025, 2, 14, 1, 51, 33, 12376), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:51:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:51:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:51:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:51:33,197 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:51:33,200 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:51:33,203 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:51:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:51:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:04.000511+00:00 (in 29.999572 seconds) buildlogsarchiver stdout | 2025-02-14 01:51:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:04 UTC)" (scheduled at 2025-02-14 01:51:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:51:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 51, 34, 1207), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:51:34,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:51:34,010 [59] [DEBUG] [data.database] Disconnecting from database. 
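The ArchiveBuildLogsWorker run above picks one random candidate from up to 50 repositorybuild rows whose logs are not yet archived and whose phase is terminal (complete, error, cancelled) or whose start time predates a cutoff (2025-01-30 here, roughly two weeks before the run). A minimal sketch of that selection; the two-week age constant is inferred from the logged parameter, not taken from the Quay source:

    # Sketch: choose one build whose logs still need archiving, mirroring the
    # repositorybuild query in the buildlogsarchiver debug output.
    from datetime import datetime, timedelta

    CANDIDATE_SQL = """
    SELECT candidates.id FROM (
        SELECT t1.id FROM repositorybuild AS t1
        WHERE (t1.phase IN ('complete', 'error', 'cancelled') OR t1.started < %s)
          AND t1.logs_archived = %s
        LIMIT 50
    ) AS candidates
    ORDER BY random()
    LIMIT 1
    """

    def pick_build_to_archive(conn, presumed_dead_age=timedelta(days=14)):  # age assumed
        cutoff = datetime.utcnow() - presumed_dead_age
        with conn.cursor() as cur:
            cur.execute(CANDIDATE_SQL, (cutoff, False))
            return cur.fetchone()  # None here, matching "No more builds to archive"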
buildlogsarchiver stdout | 2025-02-14 01:51:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:04 UTC)" executed successfully
gunicorn-registry stdout | 2025-02-14 01:51:34,483 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'}
gunicorn-registry stdout | 2025-02-14 01:51:34,486 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'}
gunicorn-registry stdout | 2025-02-14 01:51:34,489 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'}
gunicorn-registry stdout | 2025-02-14 01:51:34,494 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'}
gunicorn-registry stdout | 2025-02-14 01:51:34,496 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'}
gunicorn-registry stdout | 2025-02-14 01:51:34,500 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'}
gunicorn-registry stdout | 2025-02-14 01:51:34,502 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'}
gunicorn-registry stdout | 2025-02-14 01:51:34,551 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'}
gunicorn-registry stdout | 2025-02-14 01:51:34,555 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'}
notificationworker stdout | 2025-02-14 01:51:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:51:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:45.803718+00:00 (in 9.999503 seconds)
notificationworker stdout | 2025-02-14 01:51:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:45 UTC)" (scheduled at 2025-02-14 01:51:35.803718+00:00)
notificationworker stdout | 2025-02-14 01:51:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:51:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 35, 804426), True, datetime.datetime(2025, 2, 14, 1, 51, 35, 804426), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:51:35,814 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:51:35,814 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:51:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:45 UTC)" executed successfully
repositorygcworker stdout | 2025-02-14 01:51:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
repositorygcworker stdout | 2025-02-14 01:51:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:33.011632+00:00 (in 56.996436 seconds)
repositorygcworker stdout | 2025-02-14 01:51:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:36 UTC)" (scheduled at 2025-02-14 01:51:36.014770+00:00)
repositorygcworker stdout | 2025-02-14 01:51:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog.
repositorygcworker stdout | 2025-02-14 01:51:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:36 UTC)" executed successfully
gunicorn-web stdout | 2025-02-14 01:51:44,007 [242] [DEBUG] [app] Starting request: urn:request:7e9a0f7b-89d6-4dc3-92fc-ab6e83444794 (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:51:44,007 [245] [DEBUG] [app] Starting request: urn:request:ca499c5d-82bb-4e67-b759-02ec7f52f0c2 (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:51:44,008 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:44,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:44,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-registry stdout | 2025-02-14 01:51:44,012 [246] [DEBUG] [app] Starting request: urn:request:dd4578da-1354-420b-a1a2-cab8e6bcaf7f (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-registry stdout | 2025-02-14 01:51:44,012 [246] [DEBUG] [app] Ending request: urn:request:dd4578da-1354-420b-a1a2-cab8e6bcaf7f (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:dd4578da-1354-420b-a1a2-cab8e6bcaf7f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
gunicorn-registry stdout | 2025-02-14 01:51:44,012 [253] [DEBUG] [app] Starting request: urn:request:e85d45f7-a874-4121-bfd2-8a427d041d4c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001)
gunicorn-registry stdout | 2025-02-14 01:51:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
gunicorn-registry stdout | 2025-02-14 01:51:44,013 [253] [DEBUG] [app] Ending request: urn:request:e85d45f7-a874-4121-bfd2-8a427d041d4c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:e85d45f7-a874-4121-bfd2-8a427d041d4c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:51:44,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-registry stdout | 2025-02-14 01:51:44,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002)
gunicorn-web stdout | 2025-02-14 01:51:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:44,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:44,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:44,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:44,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:44,017 [244] [DEBUG] [app] Starting request: urn:request:af7b73ee-b8f5-47aa-af77-fccecf2ea87b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:51:44,017 [242] [DEBUG] [app] Starting request: urn:request:eee1da58-3012-4776-b7b2-3a6cc82337d7 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:51:44,017 [244] [DEBUG] [app] Ending request: urn:request:af7b73ee-b8f5-47aa-af77-fccecf2ea87b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:af7b73ee-b8f5-47aa-af77-fccecf2ea87b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:51:44,017 [242] [DEBUG] [app] Ending request: urn:request:eee1da58-3012-4776-b7b2-3a6cc82337d7 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:eee1da58-3012-4776-b7b2-3a6cc82337d7', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:51:44,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:51:44,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001)
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001)
gunicorn-web stdout | 2025-02-14 01:51:44,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:44,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:44,018 [245] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:51:44,018 [245] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:51:44,018 [242] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:51:44,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:51:44,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:51:44,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:51:44,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:51:44,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:51:44,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:51:44,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:51:44,033 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:51:44,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:51:44,036 [242] [DEBUG] [app] Ending request: urn:request:7e9a0f7b-89d6-4dc3-92fc-ab6e83444794 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:7e9a0f7b-89d6-4dc3-92fc-ab6e83444794', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:51:44,036 [245] [DEBUG] [app] Ending request: urn:request:ca499c5d-82bb-4e67-b759-02ec7f52f0c2 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:ca499c5d-82bb-4e67-b759-02ec7f52f0c2', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:51:44,036 [242] [DEBUG] [data.database] Disconnecting from database.
gunicorn-web stdout | 2025-02-14 01:51:44,036 [245] [DEBUG] [data.database] Disconnecting from database.
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030)
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030)
gunicorn-web stdout | 2025-02-14 01:51:44,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
gunicorn-web stdout | 2025-02-14 01:51:44,036 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
exportactionlogsworker stdout | 2025-02-14 01:51:44,680 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'}
quotaregistrysizeworker stdout | 2025-02-14 01:51:44,778 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'}
namespacegcworker stdout | 2025-02-14 01:51:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
namespacegcworker stdout | 2025-02-14 01:51:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:12.505687+00:00 (in 27.001496 seconds)
namespacegcworker stdout | 2025-02-14 01:51:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:45 UTC)" (scheduled at 2025-02-14 01:51:45.503718+00:00)
namespacegcworker stdout | 2025-02-14 01:51:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue.
namespacegcworker stdout | 2025-02-14 01:51:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 45, 504397), True, datetime.datetime(2025, 2, 14, 1, 51, 45, 504397), 0, 'namespacegc/%', 50, 1, 0])
namespacegcworker stdout | 2025-02-14 01:51:45,514 [73] [DEBUG] [workers.queueworker] No more work.
namespacegcworker stdout | 2025-02-14 01:51:45,514 [73] [DEBUG] [data.database] Disconnecting from database.
namespacegcworker stdout | 2025-02-14 01:51:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:45 UTC)" executed successfully
notificationworker stdout | 2025-02-14 01:51:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:51:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:55.803718+00:00 (in 9.999548 seconds)
notificationworker stdout | 2025-02-14 01:51:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:55 UTC)" (scheduled at 2025-02-14 01:51:45.803718+00:00)
notificationworker stdout | 2025-02-14 01:51:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:51:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 45, 804372), True, datetime.datetime(2025, 2, 14, 1, 51, 45, 804372), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:51:45,814 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:51:45,814 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:51:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:51:55 UTC)" executed successfully
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:46.009738+00:00 (in 59.999526 seconds)
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:46 UTC)" (scheduled at 2025-02-14 01:51:46.009738+00:00)
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0])
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,018 [78] [DEBUG] [data.database] Disconnecting from database.
quotaregistrysizeworker stdout | 2025-02-14 01:51:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:46 UTC)" executed successfully
queuecleanupworker stdout | 2025-02-14 01:51:46,830 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'}
securityscanningnotificationworker stdout | 2025-02-14 01:51:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:51:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:59.123196+00:00 (in 10.997593 seconds)
securityscanningnotificationworker stdout | 2025-02-14 01:51:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:48 UTC)" (scheduled at 2025-02-14 01:51:48.125163+00:00)
securityscanningnotificationworker stdout | 2025-02-14 01:51:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog.
securityscanningnotificationworker stdout | 2025-02-14 01:51:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:48 UTC)" executed successfully
namespacegcworker stdout | 2025-02-14 01:51:50,231 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'}
teamsyncworker stdout | 2025-02-14 01:51:50,580 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'}
autopruneworker stdout | 2025-02-14 01:51:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
autopruneworker stdout | 2025-02-14 01:51:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:22.310342+00:00 (in 29.999567 seconds)
autopruneworker stdout | 2025-02-14 01:51:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:22 UTC)" (scheduled at 2025-02-14 01:51:52.310342+00:00)
autopruneworker stdout | 2025-02-14 01:51:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494312316, None, 1, 0])
autopruneworker stdout | 2025-02-14 01:51:52,320 [56] [INFO] [__main__] no autoprune tasks found, exiting...
autopruneworker stdout | 2025-02-14 01:51:52,320 [56] [DEBUG] [data.database] Disconnecting from database.
autopruneworker stdout | 2025-02-14 01:51:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:22 UTC)" executed successfully
expiredappspecifictokenworker stdout | 2025-02-14 01:51:52,374 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'}
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:05.898886+00:00 (in 12.997773 seconds)
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:52 UTC)" (scheduled at 2025-02-14 01:51:52.900596+00:00)
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0])
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,910 [71] [DEBUG] [data.database] Disconnecting from database.
manifestsubjectbackfillworker stdout | 2025-02-14 01:51:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:52 UTC)" executed successfully
buildlogsarchiver stdout | 2025-02-14 01:51:53,230 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'}
securityworker stdout | 2025-02-14 01:51:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityworker stdout | 2025-02-14 01:51:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:59.232325+00:00 (in 5.000665 seconds)
securityworker stdout | 2025-02-14 01:51:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:24 UTC)" (scheduled at 2025-02-14 01:51:54.231161+00:00)
securityworker stdout | 2025-02-14 01:51:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request
securityworker stdout | 2025-02-14 01:51:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state
securityworker stdout | 2025-02-14 01:51:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None
securityworker stdout | 2025-02-14 01:51:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', [])
securityworker stdout | 2025-02-14 01:51:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', [])
securityworker stdout | 2025-02-14 01:51:54,246 [88] [DEBUG] [data.database] Disconnecting from database.
securityworker stdout | 2025-02-14 01:51:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:24 UTC)" executed successfully
gcworker stdout | 2025-02-14 01:51:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:51:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:51:55.392556+00:00 (in 1.001710 seconds)
gcworker stdout | 2025-02-14 01:51:54,391 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:24 UTC)" (scheduled at 2025-02-14 01:51:54.390410+00:00)
gcworker stdout | 2025-02-14 01:51:54,391 [64] [DEBUG] [__main__] No GC policies found
gcworker stdout | 2025-02-14 01:51:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:24 UTC)" executed successfully
storagereplication stdout | 2025-02-14 01:51:55,087 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'}
gcworker stdout | 2025-02-14 01:51:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:51:55,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:24.390410+00:00 (in 28.997389 seconds)
gcworker stdout | 2025-02-14 01:51:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:25 UTC)" (scheduled at 2025-02-14 01:51:55.392556+00:00)
gcworker stdout | 2025-02-14 01:51:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0])
gcworker stdout | 2025-02-14 01:51:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497615401, None, 1, 0])
gcworker stdout | 2025-02-14 01:51:55,405 [64] [DEBUG] [data.database] Disconnecting from database.
gcworker stdout | 2025-02-14 01:51:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:25 UTC)" executed successfully
notificationworker stdout | 2025-02-14 01:51:55,468 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'}
notificationworker stdout | 2025-02-14 01:51:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:51:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:05.803718+00:00 (in 9.999538 seconds)
notificationworker stdout | 2025-02-14 01:51:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:05 UTC)" (scheduled at 2025-02-14 01:51:55.803718+00:00)
notificationworker stdout | 2025-02-14 01:51:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:51:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 55, 804486), True, datetime.datetime(2025, 2, 14, 1, 51, 55, 804486), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:51:55,814 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:51:55,814 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:51:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:05 UTC)" executed successfully
manifestbackfillworker stdout | 2025-02-14 01:51:56,222 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'}
globalpromstats stdout | 2025-02-14 01:51:56,715 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'}
builder stdout | 2025-02-14 01:51:57,051 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'}
servicekey stdout | 2025-02-14 01:51:57,395 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'}
logrotateworker stdout | 2025-02-14 01:51:57,715 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'}
securityworker stdout | 2025-02-14 01:51:57,875 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'}
blobuploadcleanupworker stdout | 2025-02-14 01:51:58,146 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'}
autopruneworker stdout | 2025-02-14 01:51:58,395 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'}
repositoryactioncounter stdout | 2025-02-14 01:51:58,498 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'}
repositorygcworker stdout | 2025-02-14 01:51:58,970 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'}
gunicorn-web stdout | 2025-02-14 01:51:59,007 [242] [DEBUG] [app] Starting request: urn:request:14a6a62c-f142-4422-952b-f02fb25f3e9e (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:51:59,007 [245] [DEBUG] [app] Starting request: urn:request:51428f43-2c06-4723-9117-5150347e8752 (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:51:59,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:59,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:59,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:59,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-registry stdout | 2025-02-14 01:51:59,012 [253] [DEBUG] [app] Starting request: urn:request:d67e8f0b-3005-477b-9e5d-c735641c2790 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-registry stdout | 2025-02-14 01:51:59,012 [253] [DEBUG] [app] Ending request: urn:request:d67e8f0b-3005-477b-9e5d-c735641c2790 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d67e8f0b-3005-477b-9e5d-c735641c2790', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001)
gunicorn-registry stdout | 2025-02-14 01:51:59,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:51:59,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-registry stdout | 2025-02-14 01:51:59,014 [249] [DEBUG] [app] Starting request: urn:request:dc14aae7-9573-4838-986b-f08517a6a044 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:51:59,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-registry stdout | 2025-02-14 01:51:59,014 [249] [DEBUG] [app] Ending request: urn:request:dc14aae7-9573-4838-986b-f08517a6a044 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:dc14aae7-9573-4838-986b-f08517a6a044', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
gunicorn-registry stdout | 2025-02-14 01:51:59,015 [249] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.004 162 0.004)
gunicorn-web stdout | 2025-02-14 01:51:59,016 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:59,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:59,017 [244] [DEBUG] [app] Starting request: urn:request:11e8f360-c470-486a-842c-da156950fc50 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:51:59,017 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:51:59,017 [244] [DEBUG] [app] Ending request: urn:request:11e8f360-c470-486a-842c-da156950fc50 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:11e8f360-c470-486a-842c-da156950fc50', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:51:59,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001)
gunicorn-web stdout | 2025-02-14 01:51:59,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:59,018 [242] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:51:59,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:51:59,019 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:51:59,020 [244] [DEBUG] [app] Starting request: urn:request:103ec076-135a-42de-99c9-a115e3f16316 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:51:59,020 [244] [DEBUG] [app] Ending request: urn:request:103ec076-135a-42de-99c9-a115e3f16316 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:103ec076-135a-42de-99c9-a115e3f16316', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001)
gunicorn-web stdout | 2025-02-14 01:51:59,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:51:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:51:59,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:51:59,021 [245] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:51:59,021 [245] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:51:59,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:51:59,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:51:59,026 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:51:59,026 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:51:59,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:51:59,033 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:51:59,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:51:59,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:51:59,036 [242] [DEBUG] [app] Ending request: urn:request:14a6a62c-f142-4422-952b-f02fb25f3e9e (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:14a6a62c-f142-4422-952b-f02fb25f3e9e', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:51:59,036 [242] [DEBUG] [data.database] Disconnecting from database.
gunicorn-web stdout | 2025-02-14 01:51:59,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030)
gunicorn-web stdout | 2025-02-14 01:51:59,038 [245] [DEBUG] [app] Ending request: urn:request:51428f43-2c06-4723-9117-5150347e8752 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:51428f43-2c06-4723-9117-5150347e8752', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:51:59,038 [245] [DEBUG] [data.database] Disconnecting from database.
gunicorn-web stdout | 2025-02-14 01:51:59,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:51:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:51:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032)
securityscanningnotificationworker stdout | 2025-02-14 01:51:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:51:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:48.125163+00:00 (in 49.001547 seconds)
securityscanningnotificationworker stdout | 2025-02-14 01:51:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:59 UTC)" (scheduled at 2025-02-14 01:51:59.123196+00:00)
securityscanningnotificationworker stdout | 2025-02-14 01:51:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue.
securityscanningnotificationworker stdout | 2025-02-14 01:51:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 51, 59, 123925), True, datetime.datetime(2025, 2, 14, 1, 51, 59, 123925), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:51:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:51:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:51:59,134 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:52:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:51:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:51:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:24.231161+00:00 (in 24.998373 seconds) securityworker stdout | 2025-02-14 01:51:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:29 UTC)" (scheduled at 2025-02-14 01:51:59.232325+00:00) securityworker stdout | 2025-02-14 01:51:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:51:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:51:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:51:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:59,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:59,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:51:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 46, 59, 236679), 1, 2]) securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:51:59,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:51:59,253 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 46, 59, 236679), 1, 2]) securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:51:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:51:59,255 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:51:59,256 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:51:59,695 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:52:01,289 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:52:01,292 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:52:01,295 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:52:01,298 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:52:01,300 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:52:01,431 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:52:02,267 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:52:02,659 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:52:03,204 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:52:03,207 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:52:03,209 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:52:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for 
jobs to run buildlogsarchiver stdout | 2025-02-14 01:52:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:34.000511+00:00 (in 29.999550 seconds) buildlogsarchiver stdout | 2025-02-14 01:52:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:34 UTC)" (scheduled at 2025-02-14 01:52:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:52:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 52, 4, 1229), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:52:04,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:52:04,010 [59] [DEBUG] [data.database] Disconnecting from database. buildlogsarchiver stdout | 2025-02-14 01:52:04,010 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:52:04,493 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:52:04,496 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:52:04,501 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:52:04,505 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:52:04,508 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:52:04,512 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:52:04,515 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:52:04,561 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:52:04,563 [251] [DEBUG] 
[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:52:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:07.807092+00:00 (in 2.002909 seconds) notificationworker stdout | 2025-02-14 01:52:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:15 UTC)" (scheduled at 2025-02-14 01:52:05.803718+00:00) notificationworker stdout | 2025-02-14 01:52:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:52:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 5, 804481), True, datetime.datetime(2025, 2, 14, 1, 52, 5, 804481), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:52:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:52.900596+00:00 (in 47.001261 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:05 UTC)" (scheduled at 2025-02-14 01:52:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,908 [71] [DEBUG] [data.database] Disconnecting from database. 
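The notificationworker entries above show the generic QueueWorker poll: every 10 seconds it selects one random available queueitem whose queue_name matches 'notification/%', still has retries remaining, and is either unclaimed or whose processing lease has expired. A standalone rendering of that selection is sketched below using plain psycopg2 rather than Quay's peewee models; the DSN is a placeholder, and the batch-of-50-then-randomize wrapping seen in the logged query is simplified to a single ORDER BY RANDOM().

```python
from datetime import datetime

import psycopg2  # assumed driver for the sketch; Quay itself goes through peewee

# Placeholder connection string, not taken from the deployment.
conn = psycopg2.connect("dbname=quay user=quay host=quayregistry-quay-database")

POLL_SQL = """
SELECT qi.id, qi.queue_name, qi.body, qi.retries_remaining
FROM queueitem AS qi
WHERE qi.available_after <= %s
  AND (qi.available = TRUE OR qi.processing_expires <= %s)
  AND qi.retries_remaining > 0
  AND qi.queue_name ILIKE %s
ORDER BY RANDOM()
LIMIT 1
"""

now = datetime.utcnow()
with conn, conn.cursor() as cur:
    cur.execute(POLL_SQL, (now, now, "notification/%"))
    item = cur.fetchone()
    if item is None:
        print("No more work.")  # matches the workers.queueworker DEBUG line
    else:
        print("claimed queue item", item[0])
```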
manifestsubjectbackfillworker stdout | 2025-02-14 01:52:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:52:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:15.803718+00:00 (in 7.996176 seconds) notificationworker stdout | 2025-02-14 01:52:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:07 UTC)" (scheduled at 2025-02-14 01:52:07.807092+00:00) notificationworker stdout | 2025-02-14 01:52:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. notificationworker stdout | 2025-02-14 01:52:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:52:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:52:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:45.503718+00:00 (in 32.997578 seconds) namespacegcworker stdout | 2025-02-14 01:52:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:12 UTC)" (scheduled at 2025-02-14 01:52:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:52:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:52:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:52:14,007 [244] [DEBUG] [app] Starting request: urn:request:68f68526-b9ea-42d5-a975-29588d5e16b5 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:14,008 [245] [DEBUG] [app] Starting request: urn:request:1b31fb3c-a3bc-47f0-85b0-34fe9aa10b4b (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:14,008 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:14,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:14,011 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:52:14,012 [246] [DEBUG] [app] Starting request: urn:request:7a3e60e2-ef8f-4f55-a193-d9e6c6ff665f (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:14,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:52:14,012 [246] [DEBUG] [app] Ending request: urn:request:7a3e60e2-ef8f-4f55-a193-d9e6c6ff665f (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:7a3e60e2-ef8f-4f55-a193-d9e6c6ff665f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:14,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:52:14,012 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:52:14,013 [253] [DEBUG] [app] Starting request: urn:request:ceabf7e9-1255-41fb-85c4-35ff1e8c9283 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:14,013 [253] [DEBUG] [app] Ending request: urn:request:ceabf7e9-1255-41fb-85c4-35ff1e8c9283 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:ceabf7e9-1255-41fb-85c4-35ff1e8c9283', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:14,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:52:14,014 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:14,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:14,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:14,016 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:14,017 [245] [DEBUG] [app] Starting request: urn:request:bf9f9d87-c1b6-49ca-86f8-7cdaefbdc68b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:14,017 [245] [DEBUG] [app] Ending request: urn:request:bf9f9d87-c1b6-49ca-86f8-7cdaefbdc68b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:bf9f9d87-c1b6-49ca-86f8-7cdaefbdc68b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:14,017 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:14,017 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:14,018 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:14,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:14,018 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:14,019 [243] [DEBUG] [app] Starting request: urn:request:c6f4af61-b2cb-4b41-9a05-bde571799135 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:14,019 [243] [DEBUG] [app] Ending request: urn:request:c6f4af61-b2cb-4b41-9a05-bde571799135 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:c6f4af61-b2cb-4b41-9a05-bde571799135', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:14,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:14,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:14,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:14,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:14,023 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
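The repeated py.warnings entries above are urllib3's InsecureRequestWarning: the instance health check calls its own nginx endpoints over HTTPS on localhost with certificate verification disabled, so every probe emits the warning. The python-requests/2.32.2 user-agent in the access lines confirms the client library; the snippet below reproduces the same warning against the local endpoint seen in the log, though the exact way Quay wires the call is not shown here and is assumed.

```python
import requests

# Hitting a local HTTPS endpoint with verify=False triggers the same
# urllib3 InsecureRequestWarning seen in the gunicorn-web logs.
resp = requests.get("https://localhost:8443/v1/_internal_ping", verify=False)
print(resp.status_code, len(resp.content))  # e.g. "200 4", matching the access log

# To silence the warning deliberately (not recommended for a real registry):
# import urllib3
# urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
```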
gunicorn-web stdout | 2025-02-14 01:52:14,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:14,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:52:14,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:14,030 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:14,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:14,033 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:14,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:14,035 [244] [DEBUG] [app] Ending request: urn:request:68f68526-b9ea-42d5-a975-29588d5e16b5 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:68f68526-b9ea-42d5-a975-29588d5e16b5', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:14,036 [244] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:52:14,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:52:14,037 [245] [DEBUG] [app] Ending request: urn:request:1b31fb3c-a3bc-47f0-85b0-34fe9aa10b4b (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:1b31fb3c-a3bc-47f0-85b0-34fe9aa10b4b', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:14,037 [245] [DEBUG] [data.database] Disconnecting from database. 
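The team-role health check above brackets a single SELECT with SET statement_timeout so a hung database cannot stall the probe: the timeout is set to 5000 ms, one teamrole row is fetched, and the timeout is reset to 0. A standalone psycopg2 sketch of that bracket follows; the connection string is a placeholder and the helper name is illustrative.

```python
import psycopg2

conn = psycopg2.connect("dbname=quay user=quay host=quayregistry-quay-database")


def team_roles_exist(connection, timeout_ms=5000):
    """Return True if at least one teamrole row exists, bounded by a
    per-statement timeout, mirroring the data.model.health lines above."""
    with connection.cursor() as cur:
        cur.execute("SET statement_timeout=%s;", (timeout_ms,))
        try:
            cur.execute('SELECT "id", "name" FROM "teamrole" LIMIT 1')
            return cur.fetchone() is not None
        finally:
            cur.execute("SET statement_timeout=%s;", (0,))


print(team_roles_exist(conn))
```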
gunicorn-web stdout | 2025-02-14 01:52:14,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) exportactionlogsworker stdout | 2025-02-14 01:52:14,688 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:52:14,789 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:52:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:25.803718+00:00 (in 9.999539 seconds) notificationworker stdout | 2025-02-14 01:52:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:25 UTC)" (scheduled at 2025-02-14 01:52:15.803718+00:00) notificationworker stdout | 2025-02-14 01:52:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:52:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 15, 804393), True, datetime.datetime(2025, 2, 14, 1, 52, 15, 804393), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:52:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:52:16,845 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:52:20,269 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:52:20,609 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:52:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:52:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:52.310342+00:00 (in 29.999579 seconds) autopruneworker stdout | 2025-02-14 01:52:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:52 UTC)" (scheduled at 2025-02-14 01:52:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:52:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494342316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:52:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:52:22,320 [56] [DEBUG] [data.database] Disconnecting from database. 
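The autopruneworker query above claims one autoprunetaskstatus row at a time with ORDER BY last_ran_ms ASC NULLS FIRST ... FOR UPDATE SKIP LOCKED, so multiple Quay pods can prune concurrently without contending for the same namespace task; here it finds none and exits. The claim-one-task pattern in isolation looks like the sketch below (the enabled-namespace subquery from the logged statement is omitted for brevity, and the DSN is a placeholder).

```python
import psycopg2

CLAIM_SQL = """
SELECT id, namespace_id, last_ran_ms
FROM autoprunetaskstatus
WHERE last_ran_ms < %s OR last_ran_ms IS NULL
ORDER BY last_ran_ms ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

conn = psycopg2.connect("dbname=quay user=quay host=quayregistry-quay-database")
with conn:                          # the row lock is held for the transaction
    with conn.cursor() as cur:
        cur.execute(CLAIM_SQL, (1739494342316,))  # cutoff in epoch milliseconds
        task = cur.fetchone()
        if task is None:
            print("no autoprune tasks found, exiting...")  # as in the log
        else:
            print("pruning namespace", task[1])
```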
autopruneworker stdout | 2025-02-14 01:52:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:52:22,410 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:52:23,266 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:52:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:52:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:29.232325+00:00 (in 5.000705 seconds) securityworker stdout | 2025-02-14 01:52:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:54 UTC)" (scheduled at 2025-02-14 01:52:24.231161+00:00) securityworker stdout | 2025-02-14 01:52:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:52:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:52:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:52:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:52:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:52:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
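Before indexing, the securityworker above asks Clair for its current index state ("generated jwt for security scanner request" followed by the GET to /indexer/api/v1/index_state). A hedged sketch of that call is below: it assumes Clair is configured with a pre-shared key and HS256 JWTs, which is one of Clair's supported auth modes, and the key, issuer, and claims shown are placeholders rather than Quay's exact token contents.

```python
import time

import jwt        # PyJWT
import requests

CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
PSK = b"replace-with-the-configured-pre-shared-key"   # placeholder value

# Short-lived bearer token; issuer and claims are illustrative assumptions.
token = jwt.encode(
    {"iss": "quay", "iat": int(time.time()), "exp": int(time.time()) + 300},
    PSK,
    algorithm="HS256",
)

resp = requests.get(
    f"{CLAIR}/indexer/api/v1/index_state",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # the indexer state Clair is currently reporting
```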
securityworker stdout | 2025-02-14 01:52:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:52:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:52:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:25.392556+00:00 (in 1.001720 seconds) gcworker stdout | 2025-02-14 01:52:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:54 UTC)" (scheduled at 2025-02-14 01:52:24.390410+00:00) gcworker stdout | 2025-02-14 01:52:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:52:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:52:25,115 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:52:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:52:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:30.212654+00:00 (in 4.996893 seconds) exportactionlogsworker stdout | 2025-02-14 01:52:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:25 UTC)" (scheduled at 2025-02-14 01:52:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:52:25,216 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:52:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:25 UTC)" executed successfully gcworker stdout | 2025-02-14 01:52:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:52:25,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:54.390410+00:00 (in 28.997419 seconds) gcworker stdout | 2025-02-14 01:52:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:55 UTC)" (scheduled at 2025-02-14 01:52:25.392556+00:00) gcworker stdout | 2025-02-14 01:52:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:52:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497645401, None, 1, 0]) gcworker stdout | 2025-02-14 01:52:25,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:52:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:52:25,500 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:52:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:35.803718+00:00 (in 9.999550 seconds) notificationworker stdout | 2025-02-14 01:52:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:35 UTC)" (scheduled at 2025-02-14 01:52:25.803718+00:00) notificationworker stdout | 2025-02-14 01:52:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:52:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 25, 804466), True, datetime.datetime(2025, 2, 14, 1, 52, 25, 804466), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
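Each worker process in this log pushes its own metrics registry to the local Prometheus pushgateway at http://localhost:9091, using host, process_name, and pid as the grouping key so series from different workers in the same pod do not overwrite each other. With prometheus_client the same push looks roughly like the sketch below; the metric and job name are made-up examples, not Quay's real registry contents.

```python
import os
import socket

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
# Example metric only; Quay pushes its real per-process registry here.
heartbeat = Gauge("worker_heartbeat_timestamp", "Last loop completion time",
                  registry=registry)
heartbeat.set_to_current_time()

push_to_gateway(
    "localhost:9091",
    job="quay",                              # job name is an assumption
    registry=registry,
    grouping_key={
        "host": socket.gethostname(),        # e.g. quayregistry-quay-app-5dc574b8bf-tszt7
        "process_name": "notificationworker.py",
        "pid": str(os.getpid()),
    },
)
```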
notificationworker stdout | 2025-02-14 01:52:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:52:26,259 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:52:26,738 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:52:27,087 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:52:27,431 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:52:27,751 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:52:27,912 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:52:28,182 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:52:28,425 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:52:28,529 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} repositorygcworker stdout | 2025-02-14 01:52:29,003 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gunicorn-web stdout | 2025-02-14 01:52:29,007 [242] [DEBUG] [app] Starting request: urn:request:2bb75371-797d-45e4-a276-6970a498a05b (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:29,009 [243] [DEBUG] [app] Starting request: urn:request:f1911141-df69-4350-8d57-1f744fd72568 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:29,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:29,010 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:29,012 
[242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:29,013 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:52:29,013 [246] [DEBUG] [app] Starting request: urn:request:dbafd907-ca7f-41c3-ac07-b2da478dc958 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:29,014 [246] [DEBUG] [app] Ending request: urn:request:dbafd907-ca7f-41c3-ac07-b2da478dc958 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:dbafd907-ca7f-41c3-ac07-b2da478dc958', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:29,014 [252] [DEBUG] [app] Starting request: urn:request:dd7a7bf1-ebd3-494f-9460-942bcf1a8404 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:29,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:52:29,014 [252] [DEBUG] [app] Ending request: urn:request:dd7a7bf1-ebd3-494f-9460-942bcf1a8404 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:dd7a7bf1-ebd3-494f-9460-942bcf1a8404', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:29,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:52:29,015 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:29,015 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:52:29,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:29,016 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:29,018 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 
'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:29,018 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:29,019 [245] [DEBUG] [app] Starting request: urn:request:4504ad79-ebdb-4777-868b-4da3d4fcc4fe (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:29,019 [242] [DEBUG] [app] Starting request: urn:request:8fd70e42-bb8a-45a2-9a06-9eb91931c73e (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:29,019 [245] [DEBUG] [app] Ending request: urn:request:4504ad79-ebdb-4777-868b-4da3d4fcc4fe (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:4504ad79-ebdb-4777-868b-4da3d4fcc4fe', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:29,019 [242] [DEBUG] [app] Ending request: urn:request:8fd70e42-bb8a-45a2-9a06-9eb91931c73e (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8fd70e42-bb8a-45a2-9a06-9eb91931c73e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:29,019 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:29,020 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:29,020 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:29,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:29,020 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:29,020 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:29,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:29,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:29,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:52:29,026 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:52:29,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:29,026 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:29,033 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:29,033 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:29,035 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:29,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:29,038 [242] [DEBUG] [app] Ending request: urn:request:2bb75371-797d-45e4-a276-6970a498a05b (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:2bb75371-797d-45e4-a276-6970a498a05b', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:29,038 [243] [DEBUG] [app] Ending request: urn:request:f1911141-df69-4350-8d57-1f744fd72568 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:f1911141-df69-4350-8d57-1f744fd72568', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:29,038 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:52:29,038 [243] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:52:29,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:52:29,038 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) securityworker stdout | 2025-02-14 01:52:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:52:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:54.231161+00:00 (in 24.998358 seconds) securityworker stdout | 2025-02-14 01:52:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:59 UTC)" (scheduled at 2025-02-14 01:52:29.232325+00:00) securityworker stdout | 2025-02-14 01:52:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:52:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:52:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:52:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:29,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:29,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:29,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:29,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No 
candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 47, 29, 236614), 1, 2]) securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 47, 29, 236614), 1, 2]) securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range 
completed: 1-2 securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:52:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:52:29,731 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:52:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:52:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:25.215238+00:00 (in 55.002144 seconds) exportactionlogsworker stdout | 2025-02-14 01:52:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:30 UTC)" (scheduled at 2025-02-14 01:52:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:52:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:52:30,215 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 30, 213379), True, datetime.datetime(2025, 2, 14, 1, 52, 30, 213379), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:52:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:52:30,223 [63] [DEBUG] [data.database] Disconnecting from database. 
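The util.migrate.allocator lines above show the securityworker walking the manifest id range in randomly chosen blocks ("holes"), marking a block completed when the query inside it finds no candidates, and stopping once no holes remain ("No more work by worker"). A minimal sketch of that idea in plain Python follows; it is an illustration of the pattern, not Quay's actual util.migrate.allocator.

    import random

    class BlockAllocator:
        """Toy version of the block/hole allocation idea seen in the log."""

        def __init__(self, min_id, max_id, block_size=10000):
            self.block_size = block_size
            self.holes = [(min_id, max_id)]   # half-open id ranges still to scan

        def next_block(self):
            if not self.holes:
                return None                               # "No more work by worker"
            idx = random.randrange(len(self.holes))       # "Selected random hole"
            start, end = self.holes.pop(idx)
            block_end = min(start + self.block_size, end)
            if block_end < end:
                self.holes.append((block_end, end))       # remainder stays a hole
            return start, block_end

    allocator = BlockAllocator(1, 2)           # range 1-2 taken from the log
    while (block := allocator.next_block()) is not None:
        start, end = block
        # A real worker would query manifests with start <= id < end here and,
        # finding no candidates, mark the whole block completed.
        print(f"Marking the range completed: {start}-{end}")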
exportactionlogsworker stdout | 2025-02-14 01:52:30,224 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:52:31,298 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:52:31,300 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:52:31,304 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:52:31,307 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:52:31,309 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:52:31,458 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:52:32,303 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:52:32,688 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:52:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:52:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:36.014770+00:00 (in 3.002662 seconds) repositorygcworker stdout | 2025-02-14 01:52:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:33 UTC)" (scheduled at 2025-02-14 01:52:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:52:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:52:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 33, 12427), True, datetime.datetime(2025, 2, 14, 1, 52, 33, 12427), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:52:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:52:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:52:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:52:33,213 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:52:33,216 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:52:33,219 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:52:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:52:34,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:04.000511+00:00 (in 29.999522 seconds) buildlogsarchiver stdout | 2025-02-14 01:52:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:04 UTC)" (scheduled at 2025-02-14 01:52:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:52:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 52, 34, 1281), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:52:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:52:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
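The poll_queue entries for exportactionlogsworker, repositorygcworker, and (below) notificationworker all issue the same shape of peewee query: select up to 50 runnable items whose queue_name matches a worker-specific ILIKE prefix, then pick one at random. A minimal sketch of that query shape, using peewee against an in-memory SQLite database; the fields follow the columns in the log, but this is not Quay's model definition, and the join against the inner select is simplified to an IN clause.

    import datetime
    from peewee import (SqliteDatabase, Model, CharField, TextField,
                        DateTimeField, BooleanField, IntegerField, fn)

    db = SqliteDatabase(":memory:")

    class QueueItem(Model):
        # Columns follow the SELECT in the log; not Quay's actual model.
        queue_name = CharField(index=True)
        body = TextField()
        available_after = DateTimeField()
        available = BooleanField(default=True)
        processing_expires = DateTimeField(null=True)
        retries_remaining = IntegerField(default=5)

        class Meta:
            database = db

    db.create_tables([QueueItem])
    now = datetime.datetime.utcnow()

    # Inner query: up to 50 runnable items for this worker's queue prefix.
    candidates = (QueueItem
                  .select(QueueItem.id)
                  .where((QueueItem.available_after <= now)
                         & ((QueueItem.available == True)
                            | (QueueItem.processing_expires <= now))
                         & (QueueItem.retries_remaining > 0)
                         & (QueueItem.queue_name ** "exportactionlogs/%"))
                  .limit(50))

    # Outer query: pick one candidate at random ("ORDER BY Random() LIMIT 1").
    item = (QueueItem
            .select()
            .where(QueueItem.id.in_(candidates))
            .order_by(fn.Random())
            .first())
    print(item)   # None on an empty queue, matching "No more work." in the log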
buildlogsarchiver stdout | 2025-02-14 01:52:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:52:34,500 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:52:34,507 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:52:34,511 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:52:34,514 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:52:34,518 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:52:34,521 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:52:34,525 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:52:34,566 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:52:34,574 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:52:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:45.803718+00:00 (in 9.999565 seconds) notificationworker stdout | 2025-02-14 01:52:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:45 UTC)" (scheduled at 2025-02-14 01:52:35.803718+00:00) notificationworker stdout | 2025-02-14 01:52:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:52:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 35, 804431), True, datetime.datetime(2025, 2, 14, 1, 52, 35, 804431), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:52:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:52:36,015 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:52:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:33.011632+00:00 (in 56.996331 seconds) repositorygcworker stdout | 2025-02-14 01:52:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:36 UTC)" (scheduled at 2025-02-14 01:52:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:52:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:52:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:52:44,007 [244] [DEBUG] [app] Starting request: urn:request:c4048e3a-87e7-4625-892c-6dd41fb72600 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:44,008 [245] [DEBUG] [app] Starting request: urn:request:fda4de52-5aea-4a8f-a965-721351c72abe (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:44,008 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:44,010 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:44,011 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:44,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:52:44,012 [246] [DEBUG] [app] Starting request: urn:request:cae6162a-157a-43c7-b189-841a8679ecdc (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:44,013 [251] [DEBUG] [app] Starting request: urn:request:511f3f85-041d-48ba-9a15-25890d9d990a (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:44,013 [246] [DEBUG] [app] Ending request: urn:request:cae6162a-157a-43c7-b189-841a8679ecdc (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:cae6162a-157a-43c7-b189-841a8679ecdc', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:44,013 [251] [DEBUG] [app] Ending request: urn:request:511f3f85-041d-48ba-9a15-25890d9d990a (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:511f3f85-041d-48ba-9a15-25890d9d990a', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:44,013 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:52:44,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:44,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:44,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:44,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:44,017 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:44,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:44,018 [243] [DEBUG] [app] Starting request: urn:request:aa375a7d-e210-4740-a9e9-3dbc61cad624 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:44,019 [243] [DEBUG] [app] Ending request: urn:request:aa375a7d-e210-4740-a9e9-3dbc61cad624 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:aa375a7d-e210-4740-a9e9-3dbc61cad624', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:44,019 [244] [DEBUG] [app] Starting request: urn:request:c4627d85-a448-41fd-b085-5f19db402597 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:52:44,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:44,019 [244] [DEBUG] [app] Ending request: urn:request:c4627d85-a448-41fd-b085-5f19db402597 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:c4627d85-a448-41fd-b085-5f19db402597', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:44,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:44,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:44,020 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:44,020 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:44,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:44,020 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:44,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:44,026 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:52:44,026 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:44,026 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:52:44,026 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:44,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:44,033 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:44,035 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:44,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:44,037 [244] [DEBUG] [app] Ending request: urn:request:c4048e3a-87e7-4625-892c-6dd41fb72600 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:c4048e3a-87e7-4625-892c-6dd41fb72600', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:44,037 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:52:44,038 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.031) gunicorn-web stdout | 2025-02-14 01:52:44,038 [245] [DEBUG] [app] Ending request: urn:request:fda4de52-5aea-4a8f-a965-721351c72abe (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:fda4de52-5aea-4a8f-a965-721351c72abe', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:44,038 [245] [DEBUG] [data.database] Disconnecting from database. 
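The /health/instance handling above shows the web workers calling their own registry and web endpoints over https://localhost:8443 with certificate verification disabled, which is what produces the repeated urllib3 InsecureRequestWarning. A minimal sketch of that self-ping, assuming only the requests library; the URL and port are copied from the log.

    import requests

    # Unverified HTTPS to the local endpoint, mirroring the health check above;
    # running this prints urllib3's InsecureRequestWarning, as in the log.
    resp = requests.get(
        "https://localhost:8443/v1/_internal_ping",   # URL and port from the log
        verify=False,
        timeout=5,
    )
    print(resp.status_code, len(resp.content))   # access log shows 200 with a 4-byte body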
gunicorn-web stdout | 2025-02-14 01:52:44,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) exportactionlogsworker stdout | 2025-02-14 01:52:44,712 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:52:44,825 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:52:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:52:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:12.505687+00:00 (in 27.001508 seconds) namespacegcworker stdout | 2025-02-14 01:52:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:45 UTC)" (scheduled at 2025-02-14 01:52:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:52:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:52:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 45, 504381), True, datetime.datetime(2025, 2, 14, 1, 52, 45, 504381), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:52:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:52:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:52:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:52:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:55.803718+00:00 (in 9.999571 seconds) notificationworker stdout | 2025-02-14 01:52:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:55 UTC)" (scheduled at 2025-02-14 01:52:45.803718+00:00) notificationworker stdout | 2025-02-14 01:52:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:52:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 45, 804358), True, datetime.datetime(2025, 2, 14, 1, 52, 45, 804358), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:52:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:52:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:52:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:52:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:46.009738+00:00 (in 59.999562 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:52:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:46 UTC)" (scheduled at 2025-02-14 01:52:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:52:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:52:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:52:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:52:46,858 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:52:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:52:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:59.123196+00:00 (in 10.997583 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:52:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:48 UTC)" (scheduled at 2025-02-14 01:52:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:52:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
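Each worker's "Looking for jobs to run" / "Running job ... (trigger: interval[...])" / "executed successfully" sequence is standard APScheduler debug output for interval jobs. A minimal sketch of the scheduling pattern, assuming the apscheduler package; the intervals (poll every 10 s, watchdog every 60 s) are taken from the log and the job bodies are placeholders.

    import logging
    from apscheduler.schedulers.blocking import BlockingScheduler

    logging.basicConfig(level=logging.DEBUG)   # surfaces the scheduler debug lines

    def poll_queue():
        # Placeholder: the real workers fetch and process one queue item here.
        print("polling queue")

    def run_watchdog():
        # Placeholder for the worker's periodic watchdog check.
        print("running watchdog")

    scheduler = BlockingScheduler()
    scheduler.add_job(poll_queue, "interval", seconds=10)
    scheduler.add_job(run_watchdog, "interval", seconds=60)
    scheduler.start()   # blocks until interrupted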
securityscanningnotificationworker stdout | 2025-02-14 01:52:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:52:50,290 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:52:50,645 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:52:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:52:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:22.310342+00:00 (in 29.999563 seconds) autopruneworker stdout | 2025-02-14 01:52:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:22 UTC)" (scheduled at 2025-02-14 01:52:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:52:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494372316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:52:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:52:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:52:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:22 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:52:52,425 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:05.898886+00:00 (in 12.997834 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:52 UTC)" (scheduled at 2025-02-14 01:52:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,910 [71] [DEBUG] [data.database] Disconnecting from database. 
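The autopruneworker query above ends with FOR UPDATE SKIP LOCKED, which lets several Quay pods poll autoprunetaskstatus concurrently: each transaction claims a different unlocked row instead of blocking on another pod's claim. A minimal sketch of that claiming pattern, assuming psycopg2 and a hypothetical DSN; the SQL is simplified from the query in the log.

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=quay user=quay host=localhost")  # assumed DSN
    cutoff_ms = int(time.time() * 1000) - 60 * 60 * 1000             # cutoff in epoch ms

    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, namespace_id, last_ran_ms, status
            FROM autoprunetaskstatus
            WHERE last_ran_ms < %s OR last_ran_ms IS NULL
            ORDER BY last_ran_ms ASC NULLS FIRST
            LIMIT 1
            FOR UPDATE SKIP LOCKED
            """,
            (cutoff_ms,),
        )
        task = cur.fetchone()
        print(task)   # None matches "no autoprune tasks found, exiting..." above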
manifestsubjectbackfillworker stdout | 2025-02-14 01:52:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:52:53,303 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:52:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:52:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:59.232325+00:00 (in 5.000683 seconds) securityworker stdout | 2025-02-14 01:52:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:24 UTC)" (scheduled at 2025-02-14 01:52:54.231161+00:00) securityworker stdout | 2025-02-14 01:52:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:52:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:52:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:52:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:52:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:52:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:52:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:52:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:52:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:52:55.392556+00:00 (in 1.001726 seconds) gcworker stdout | 2025-02-14 01:52:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:24 UTC)" (scheduled at 2025-02-14 01:52:54.390410+00:00) gcworker stdout | 2025-02-14 01:52:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:52:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:52:55,127 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:52:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:52:55,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:24.390410+00:00 (in 28.997415 seconds) gcworker stdout | 2025-02-14 01:52:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:25 UTC)" (scheduled at 2025-02-14 01:52:55.392556+00:00) gcworker stdout | 2025-02-14 01:52:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:52:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497675401, None, 1, 0]) gcworker stdout | 2025-02-14 01:52:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
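For _index_in_scanner, the securityworker generates a JWT, asks the Clair indexer for its current index state, and then reads the Max/Min manifest ids to decide what to scan. A minimal sketch of the index-state call, assuming the requests library; the service URL is copied from the log and the bearer token is a placeholder.

    import requests

    CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

    resp = requests.get(
        f"{CLAIR}/indexer/api/v1/index_state",
        headers={"Authorization": "Bearer <jwt-generated-by-quay>"},  # placeholder token
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.status_code, resp.text)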
gcworker stdout | 2025-02-14 01:52:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:52:55,536 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:52:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:52:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:05.803718+00:00 (in 9.999568 seconds) notificationworker stdout | 2025-02-14 01:52:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:05 UTC)" (scheduled at 2025-02-14 01:52:55.803718+00:00) notificationworker stdout | 2025-02-14 01:52:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:52:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 55, 804417), True, datetime.datetime(2025, 2, 14, 1, 52, 55, 804417), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:52:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:52:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
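The recurring "pushed registry to pushgateway at http://localhost:9091" lines show each worker process pushing its own metrics registry with a host/process_name/pid grouping key. A minimal sketch of that push, assuming the prometheus_client library; the metric, job label, and process name below are illustrative, not Quay's.

    import os
    import socket
    from prometheus_client import CollectorRegistry, Counter, push_to_gateway

    registry = CollectorRegistry()
    heartbeat = Counter(
        "worker_heartbeat_total",
        "Number of metric pushes from this worker",
        registry=registry,
    )
    heartbeat.inc()

    push_to_gateway(
        "localhost:9091",               # pushgateway address from the log
        job="quay",                     # job label is an assumption
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),
            "process_name": "exampleworker.py",   # hypothetical process name
            "pid": str(os.getpid()),
        },
    )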
notificationworker stdout | 2025-02-14 01:52:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:52:56,295 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:52:56,770 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:52:57,123 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:52:57,458 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:52:57,787 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:52:57,947 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:52:58,218 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:52:58,461 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:52:58,566 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:52:59,007 [245] [DEBUG] [app] Starting request: urn:request:167751b6-ec98-4310-bc0e-6da34cecb8d7 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:59,008 [242] [DEBUG] [app] Starting request: urn:request:cd38ecc8-1c1e-4fad-9565-6981e574e278 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:52:59,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:59,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:59,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:59,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:52:59,012 [246] [DEBUG] [app] Starting request: urn:request:a6e9c222-c71f-49b5-af5b-f18549ab4dd9 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:59,013 [246] [DEBUG] [app] Ending request: urn:request:a6e9c222-c71f-49b5-af5b-f18549ab4dd9 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:a6e9c222-c71f-49b5-af5b-f18549ab4dd9', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:52:59,013 [253] [DEBUG] [app] Starting request: urn:request:b9299e64-5bc5-46da-9e64-5273e7b5073c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:52:59,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:52:59,014 [253] [DEBUG] [app] Ending request: urn:request:b9299e64-5bc5-46da-9e64-5273e7b5073c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:b9299e64-5bc5-46da-9e64-5273e7b5073c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:59,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:52:59,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:52:59,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:59,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:59,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:52:59,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:59,017 [245] [DEBUG] [app] Starting request: urn:request:22fbfd10-bf10-4289-9698-ec7e09b230c8 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:59,017 [245] [DEBUG] [app] Ending request: urn:request:22fbfd10-bf10-4289-9698-ec7e09b230c8 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:22fbfd10-bf10-4289-9698-ec7e09b230c8', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:59,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:52:59,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:52:59,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:59,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:59,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:52:59,019 [243] [DEBUG] [app] Starting request: urn:request:6708455e-731e-4124-aec6-8c3fe0bfaedc (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:52:59,020 [243] [DEBUG] [app] Ending request: urn:request:6708455e-731e-4124-aec6-8c3fe0bfaedc (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:6708455e-731e-4124-aec6-8c3fe0bfaedc', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:52:59,020 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:52:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:52:59,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:52:59,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:52:59,021 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:52:59,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:52:59,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:59,026 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:52:59,026 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:52:59,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:59,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:52:59,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:59,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:52:59,036 [242] [DEBUG] [app] Ending request: urn:request:cd38ecc8-1c1e-4fad-9565-6981e574e278 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:cd38ecc8-1c1e-4fad-9565-6981e574e278', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:59,036 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:52:59,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) gunicorn-web stdout | 2025-02-14 01:52:59,037 [245] [DEBUG] [app] Ending request: urn:request:167751b6-ec98-4310-bc0e-6da34cecb8d7 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:167751b6-ec98-4310-bc0e-6da34cecb8d7', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:52:59,037 [245] [DEBUG] [data.database] Disconnecting from database. 
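The database half of the health check brackets a trivial SELECT on teamrole with a 5000 ms statement_timeout and then resets it, exactly as the peewee statements above show. A minimal sketch of the same probe, assuming psycopg2 and a hypothetical DSN.

    import psycopg2

    conn = psycopg2.connect("dbname=quay user=quay host=localhost")  # assumed DSN
    with conn, conn.cursor() as cur:
        cur.execute("SET statement_timeout=%s;", (5000,))   # bound the probe to 5000 ms
        cur.execute('SELECT "id", "name" FROM "teamrole" LIMIT 1;')
        healthy = cur.fetchone() is not None
        cur.execute("SET statement_timeout=%s;", (0,))      # back to no limit
    print("database healthy:", healthy)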
gunicorn-web stdout | 2025-02-14 01:52:59,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:52:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:52:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) repositorygcworker stdout | 2025-02-14 01:52:59,039 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityscanningnotificationworker stdout | 2025-02-14 01:52:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:52:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:48.125163+00:00 (in 49.001548 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:52:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:59 UTC)" (scheduled at 2025-02-14 01:52:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:52:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. securityscanningnotificationworker stdout | 2025-02-14 01:52:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 52, 59, 123867), True, datetime.datetime(2025, 2, 14, 1, 52, 59, 123867), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:52:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:52:59,133 [87] [DEBUG] [data.database] Disconnecting from database. 
securityscanningnotificationworker stdout | 2025-02-14 01:52:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:53:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:52:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:52:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:24.231161+00:00 (in 24.998419 seconds) securityworker stdout | 2025-02-14 01:52:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:29 UTC)" (scheduled at 2025-02-14 01:52:59.232325+00:00) securityworker stdout | 2025-02-14 01:52:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:52:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:52:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:52:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:52:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,247 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,247 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:52:59,247 [88] [DEBUG] [util.migrate.allocator] Discarding block 
and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:59,247 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:59,247 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:59,248 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 47, 59, 236430), 1, 2]) securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:59,251 [88] 
[DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:52:59,251 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 47, 59, 236430), 1, 2]) securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:52:59,254 [88] 
[DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:59,254 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:52:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:52:59,254 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:52:59,752 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:53:01,305 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:53:01,308 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:53:01,312 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:53:01,316 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:53:01,318 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:53:01,490 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:53:02,335 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:53:02,703 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 
'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:53:03,221 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:53:03,224 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:53:03,226 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:53:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:53:04,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:34.000511+00:00 (in 29.999487 seconds) buildlogsarchiver stdout | 2025-02-14 01:53:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:34 UTC)" (scheduled at 2025-02-14 01:53:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:53:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 53, 4, 1321), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:53:04,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:53:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
buildlogsarchiver stdout | 2025-02-14 01:53:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:53:04,512 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:53:04,518 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:53:04,521 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:53:04,526 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:53:04,528 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:53:04,531 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:53:04,534 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:53:04,573 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:53:04,584 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:53:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:07.807092+00:00 (in 2.002949 seconds) notificationworker stdout | 2025-02-14 01:53:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:15 UTC)" (scheduled at 2025-02-14 01:53:05.803718+00:00) notificationworker stdout | 2025-02-14 01:53:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:53:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 5, 804410), True, datetime.datetime(2025, 2, 14, 1, 53, 5, 804410), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:53:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:52.900596+00:00 (in 47.001295 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:05 UTC)" (scheduled at 2025-02-14 01:53:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:53:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:53:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:15.803718+00:00 (in 7.996186 seconds) notificationworker stdout | 2025-02-14 01:53:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:07 UTC)" (scheduled at 2025-02-14 01:53:07.807092+00:00) notificationworker stdout | 2025-02-14 01:53:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
notificationworker stdout | 2025-02-14 01:53:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:53:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:53:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:45.503718+00:00 (in 32.997541 seconds) namespacegcworker stdout | 2025-02-14 01:53:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:12 UTC)" (scheduled at 2025-02-14 01:53:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:53:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:53:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:53:14,007 [242] [DEBUG] [app] Starting request: urn:request:2ac13b0f-1fd9-4991-baa5-a57eece784cd (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:14,009 [243] [DEBUG] [app] Starting request: urn:request:62501281-7c90-4964-8df1-a10cd4ea70e6 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:14,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:14,010 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:14,012 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:53:14,013 [252] [DEBUG] [app] Starting request: urn:request:3ce25436-a5c0-416b-a390-487d8da35f14 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:14,013 [252] [DEBUG] [app] Ending request: urn:request:3ce25436-a5c0-416b-a390-487d8da35f14 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:3ce25436-a5c0-416b-a390-487d8da35f14', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:14,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:53:14,013 [246] [DEBUG] [app] Starting request: urn:request:ff443629-e98d-4ba9-a84c-ade57d923cdc (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:53:14,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:53:14,014 [246] [DEBUG] [app] Ending request: urn:request:ff443629-e98d-4ba9-a84c-ade57d923cdc (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:ff443629-e98d-4ba9-a84c-ade57d923cdc', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:14,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:14,014 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:53:14,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:14,016 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:14,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:14,018 [245] [DEBUG] [app] Starting request: urn:request:eaa12273-fea5-4b1f-a636-518dcbbbfa6b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:14,018 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. 
Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:14,018 [245] [DEBUG] [app] Ending request: urn:request:eaa12273-fea5-4b1f-a636-518dcbbbfa6b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:eaa12273-fea5-4b1f-a636-518dcbbbfa6b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:53:14,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:14,019 [242] [DEBUG] [app] Starting request: urn:request:4f275de4-026f-4834-8e28-c49f965f5da9 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:14,019 [242] [DEBUG] [app] Ending request: urn:request:4f275de4-026f-4834-8e28-c49f965f5da9 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:4f275de4-026f-4834-8e28-c49f965f5da9', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:14,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:14,019 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:14,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:14,020 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:14,020 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:14,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:14,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:14,025 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:53:14,025 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:14,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:53:14,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:14,032 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:14,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:14,035 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:14,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:14,037 [243] [DEBUG] [app] Ending request: urn:request:62501281-7c90-4964-8df1-a10cd4ea70e6 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:62501281-7c90-4964-8df1-a10cd4ea70e6', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:14,037 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:14,037 [242] [DEBUG] [app] Ending request: urn:request:2ac13b0f-1fd9-4991-baa5-a57eece784cd (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:2ac13b0f-1fd9-4991-baa5-a57eece784cd', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:14,037 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:14,037 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.031) gunicorn-web stdout | 2025-02-14 01:53:14,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:53:14,739 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:53:14,861 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:53:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:25.803718+00:00 (in 9.999571 seconds) notificationworker stdout | 2025-02-14 01:53:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:25 UTC)" (scheduled at 2025-02-14 01:53:15.803718+00:00) notificationworker stdout | 2025-02-14 01:53:15,804 
[75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:53:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 15, 804409), True, datetime.datetime(2025, 2, 14, 1, 53, 15, 804409), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:15,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:53:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:53:16,888 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:53:20,304 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:53:20,681 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:53:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:53:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:52.310342+00:00 (in 29.999560 seconds) autopruneworker stdout | 2025-02-14 01:53:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:52 UTC)" (scheduled at 2025-02-14 01:53:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:53:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494402316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:53:22,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:53:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:53:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:53:22,444 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:53:23,328 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:53:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:53:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:29.232325+00:00 (in 5.000719 seconds) securityworker stdout | 2025-02-14 01:53:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:54 UTC)" (scheduled at 2025-02-14 01:53:24.231161+00:00) securityworker stdout | 2025-02-14 01:53:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:53:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:53:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:53:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:53:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:53:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:53:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:53:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:53:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:25.392556+00:00 (in 1.001720 seconds) gcworker stdout | 2025-02-14 01:53:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:54 UTC)" (scheduled at 2025-02-14 01:53:24.390410+00:00) gcworker stdout | 2025-02-14 01:53:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:53:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:54 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:53:25,163 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} exportactionlogsworker stdout | 2025-02-14 01:53:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:53:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:30.212654+00:00 (in 4.996966 seconds) exportactionlogsworker stdout | 2025-02-14 01:53:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:25 UTC)" (scheduled at 2025-02-14 01:53:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:53:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:53:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:25 UTC)" executed successfully gcworker stdout | 2025-02-14 01:53:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:53:25,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:54.390410+00:00 (in 28.997422 seconds) gcworker stdout | 2025-02-14 01:53:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:55 UTC)" (scheduled at 2025-02-14 01:53:25.392556+00:00) gcworker stdout | 2025-02-14 01:53:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:53:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497705402, None, 1, 0]) gcworker stdout | 2025-02-14 01:53:25,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:53:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:53:25,546 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:53:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:35.803718+00:00 (in 9.999545 seconds) notificationworker stdout | 2025-02-14 01:53:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:35 UTC)" (scheduled at 2025-02-14 01:53:25.803718+00:00) notificationworker stdout | 2025-02-14 01:53:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:53:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 25, 804468), True, datetime.datetime(2025, 2, 14, 1, 53, 25, 804468), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:53:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:53:26,331 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:53:26,806 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:53:27,134 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:53:27,466 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:53:27,815 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:53:27,978 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:53:28,250 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:53:28,497 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:53:28,601 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:53:29,006 [242] [DEBUG] [app] Starting request: urn:request:75795ae5-a6a7-4b71-9c1f-839c9a1ced91 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:29,007 [243] [DEBUG] [app] Starting request: urn:request:bc9875c9-088b-4d5f-ad56-d7d2f4981384 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:29,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:29,008 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:29,010 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:29,010 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:53:29,011 [253] [DEBUG] [app] Starting request: urn:request:7253a602-d530-430f-a159-28316b89461e (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:29,011 [252] [DEBUG] [app] Starting request: urn:request:896730cb-d26f-4e1e-9c5c-8d838acda127 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:29,012 [252] [DEBUG] [app] Ending request: urn:request:896730cb-d26f-4e1e-9c5c-8d838acda127 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:896730cb-d26f-4e1e-9c5c-8d838acda127', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:29,012 [253] [DEBUG] [app] Ending request: urn:request:7253a602-d530-430f-a159-28316b89461e (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:7253a602-d530-430f-a159-28316b89461e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:29,012 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:53:29,012 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:29,012 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:29,012 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:53:29,013 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:29,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:29,015 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:29,015 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:29,016 [243] [DEBUG] [app] Starting request: urn:request:ee105da9-bf8e-4dab-9913-f47cd70a8fd5 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:29,016 [244] [DEBUG] [app] Starting request: urn:request:f52482c2-4963-4584-8c74-9a76f1fbd7f8 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:29,016 [243] [DEBUG] [app] Ending request: urn:request:ee105da9-bf8e-4dab-9913-f47cd70a8fd5 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:ee105da9-bf8e-4dab-9913-f47cd70a8fd5', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:53:29,017 [244] [DEBUG] [app] Ending request: urn:request:f52482c2-4963-4584-8c74-9a76f1fbd7f8 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:f52482c2-4963-4584-8c74-9a76f1fbd7f8', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:53:29,017 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:29,017 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:29,017 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:29,017 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:29,017 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:29,017 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:29,017 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:29,017 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:29,023 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:53:29,023 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:29,023 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:53:29,023 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:29,030 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:29,030 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:29,032 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:29,032 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:29,034 [242] [DEBUG] [app] Ending request: urn:request:75795ae5-a6a7-4b71-9c1f-839c9a1ced91 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:75795ae5-a6a7-4b71-9c1f-839c9a1ced91', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:29,035 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:29,035 [243] [DEBUG] [app] Ending request: urn:request:bc9875c9-088b-4d5f-ad56-d7d2f4981384 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:bc9875c9-088b-4d5f-ad56-d7d2f4981384', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:29,035 [243] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:53:29,035 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.029) gunicorn-web stdout | 2025-02-14 01:53:29,035 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) repositorygcworker stdout | 2025-02-14 01:53:29,064 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stdout | 2025-02-14 01:53:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:53:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:54.231161+00:00 (in 24.998390 seconds) securityworker stdout | 2025-02-14 01:53:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:59 UTC)" (scheduled at 2025-02-14 01:53:29.232325+00:00) securityworker stdout | 2025-02-14 01:53:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:53:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:53:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:53:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:29,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:29,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:53:29,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:29,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER 
JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:29,248 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 48, 29, 236412), 1, 2]) securityworker stderr | 
2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:29,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 48, 29, 236412), 1, 2]) securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 
1-2 by worker securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:29,254 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:29,254 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:53:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:53:29,789 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:53:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:53:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:25.215238+00:00 (in 55.002135 seconds) exportactionlogsworker stdout | 2025-02-14 01:53:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:30 UTC)" (scheduled at 2025-02-14 01:53:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:53:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. 
exportactionlogsworker stdout | 2025-02-14 01:53:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 30, 213437), True, datetime.datetime(2025, 2, 14, 1, 53, 30, 213437), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:53:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:53:30,223 [63] [DEBUG] [data.database] Disconnecting from database. exportactionlogsworker stdout | 2025-02-14 01:53:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:53:31,314 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:53:31,317 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:53:31,319 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:53:31,324 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:53:31,327 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:53:31,526 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:53:32,371 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:53:32,732 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:53:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:53:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 
2025-02-14 01:53:36.014770+00:00 (in 3.002638 seconds) repositorygcworker stdout | 2025-02-14 01:53:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:33 UTC)" (scheduled at 2025-02-14 01:53:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:53:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. repositorygcworker stdout | 2025-02-14 01:53:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 33, 12468), True, datetime.datetime(2025, 2, 14, 1, 53, 33, 12468), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:53:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:53:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:53:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:53:33,230 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:53:33,232 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:53:33,235 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:53:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:53:34,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:04.000511+00:00 (in 29.999502 seconds) buildlogsarchiver stdout | 2025-02-14 01:53:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:04 UTC)" (scheduled at 2025-02-14 01:53:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:53:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 53, 34, 1292), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:53:34,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:53:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
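The repositorygcworker poll above, like the exportactionlogsworker poll just before it, uses one shared query shape: take up to 50 queueitem rows whose available_after has passed, that are either available or whose processing lease has expired, that still have retries left and whose queue_name matches a prefix, then pick one at random. A simplified psycopg2 rendering of that poll, with a placeholder DSN; the SQL is condensed from what peewee emits in the log:

    import datetime
    import psycopg2

    conn = psycopg2.connect("dbname=quay user=quay password=secret host=localhost")  # placeholder DSN

    POLL_SQL = """
    SELECT t1.id, t1.queue_name, t1.body, t1.retries_remaining
    FROM queueitem AS t1
    INNER JOIN (
        SELECT id FROM queueitem
        WHERE available_after <= %s
          AND (available = %s OR processing_expires <= %s)
          AND retries_remaining > %s
          AND queue_name ILIKE %s
        LIMIT %s
    ) AS j1 ON t1.id = j1.id
    ORDER BY random()
    LIMIT %s OFFSET %s
    """

    def poll_queue(prefix: str):
        """Return one eligible work item for the given queue prefix, or None."""
        now = datetime.datetime.utcnow()
        with conn.cursor() as cur:
            cur.execute(POLL_SQL, (now, True, now, 0, prefix + "/%", 50, 1, 0))
            return cur.fetchone()

    item = poll_queue("repositorygc")  # None corresponds to the "No more work." lines above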
buildlogsarchiver stdout | 2025-02-14 01:53:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:53:34,523 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:53:34,527 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:53:34,531 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:53:34,536 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:53:34,538 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:53:34,541 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:53:34,543 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:53:34,582 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:53:34,594 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:53:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:45.803718+00:00 (in 9.999576 seconds) notificationworker stdout | 2025-02-14 01:53:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:45 UTC)" (scheduled at 2025-02-14 01:53:35.803718+00:00) notificationworker stdout | 2025-02-14 01:53:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
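Every worker in this dump, including the gunicorn-registry processes above, periodically pushes its metrics registry to a local Prometheus pushgateway on port 9091, grouped by host, process_name and pid. A minimal prometheus_client sketch of the same push; only the gateway address and the grouping-key shape come from the log, while the metric and job names are made up for the example:

    import os
    import socket

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    heartbeat = Gauge("example_worker_heartbeat", "Last time the example worker pushed", registry=registry)
    heartbeat.set_to_current_time()

    # The grouping key mirrors the {'host': ..., 'process_name': ..., 'pid': ...} keys in the log.
    push_to_gateway(
        "localhost:9091",
        job="example_worker",
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),
            "process_name": "exampleworker.py",
            "pid": str(os.getpid()),
        },
    )

Pushing to a gateway lets Prometheus scrape one endpoint for all of these per-process series instead of scraping every worker process directly.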
notificationworker stdout | 2025-02-14 01:53:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 35, 804421), True, datetime.datetime(2025, 2, 14, 1, 53, 35, 804421), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:53:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:53:36,015 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:53:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:33.011632+00:00 (in 56.996361 seconds) repositorygcworker stdout | 2025-02-14 01:53:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:36 UTC)" (scheduled at 2025-02-14 01:53:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:53:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:53:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:53:44,006 [242] [DEBUG] [app] Starting request: urn:request:399b0884-5e09-4c6f-b255-bf185b2ed71f (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:44,008 [245] [DEBUG] [app] Starting request: urn:request:4cd35e2b-4142-4338-b22d-e4ea01f2591b (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:44,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:44,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:44,010 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:53:44,011 [246] [DEBUG] [app] Starting request: urn:request:a3060995-0641-4bd1-8a2c-2cff3fbf9b70 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:44,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:53:44,012 [246] [DEBUG] [app] Ending request: urn:request:a3060995-0641-4bd1-8a2c-2cff3fbf9b70 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:a3060995-0641-4bd1-8a2c-2cff3fbf9b70', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:44,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-web stdout | 2025-02-14 01:53:44,012 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:53:44,012 [251] [DEBUG] [app] Starting request: urn:request:438d4c61-24b5-4ef6-91f3-7122649f0e9b (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:44,013 [251] [DEBUG] [app] Ending request: urn:request:438d4c61-24b5-4ef6-91f3-7122649f0e9b (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:438d4c61-24b5-4ef6-91f3-7122649f0e9b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:53:44,013 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:44,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:44,014 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:44,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:44,016 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:44,017 [244] [DEBUG] [app] Starting request: urn:request:1f18e8a0-7a1b-4fdf-8313-57f4957e64e0 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:44,017 [244] [DEBUG] [app] Ending request: urn:request:1f18e8a0-7a1b-4fdf-8313-57f4957e64e0 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:1f18e8a0-7a1b-4fdf-8313-57f4957e64e0', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:53:44,017 [242] [DEBUG] [app] Starting request: urn:request:4fb80855-cde5-49c3-bc2c-dc815c08d1c5 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:44,017 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:44,017 [242] [DEBUG] [app] Ending request: urn:request:4fb80855-cde5-49c3-bc2c-dc815c08d1c5 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:4fb80855-cde5-49c3-bc2c-dc815c08d1c5', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:44,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:44,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:44,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:44,018 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:44,018 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:44,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:44,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:44,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:53:44,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:44,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:53:44,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:44,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:44,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:44,033 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:44,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:44,036 [242] [DEBUG] [app] Ending request: urn:request:399b0884-5e09-4c6f-b255-bf185b2ed71f (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:399b0884-5e09-4c6f-b255-bf185b2ed71f', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:44,036 [245] [DEBUG] [app] Ending request: urn:request:4cd35e2b-4142-4338-b22d-e4ea01f2591b (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:4cd35e2b-4142-4338-b22d-e4ea01f2591b', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:44,036 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:44,036 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:44,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.030) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) gunicorn-web stdout | 2025-02-14 01:53:44,036 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:53:44,776 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:53:44,871 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:53:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:53:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:58.505410+00:00 (in 13.001193 seconds) namespacegcworker stdout | 2025-02-14 01:53:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:45 UTC)" (scheduled at 2025-02-14 01:53:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:53:45,504 
[73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:53:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 45, 504553), True, datetime.datetime(2025, 2, 14, 1, 53, 45, 504553), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:53:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:53:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:53:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:53:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:55.803718+00:00 (in 9.999560 seconds) notificationworker stdout | 2025-02-14 01:53:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:55 UTC)" (scheduled at 2025-02-14 01:53:45.803718+00:00) notificationworker stdout | 2025-02-14 01:53:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:53:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 45, 804434), True, datetime.datetime(2025, 2, 14, 1, 53, 45, 804434), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:45,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:53:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:53:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:53:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:53:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:46.009738+00:00 (in 59.999573 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:53:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:46 UTC)" (scheduled at 2025-02-14 01:53:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:53:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:53:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:53:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:53:46,924 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:53:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:53:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:59.123196+00:00 (in 10.997567 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:53:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:48 UTC)" (scheduled at 2025-02-14 01:53:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:53:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
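The "Looking for jobs to run" and "Running job ... (trigger: interval[...])" lines throughout this section are APScheduler's scheduler and default executor driving each worker's periodic methods, such as QueueWorker.poll_queue every 10 or 60 seconds and run_watchdog once a minute. A bare-bones sketch of that scheduling pattern; the job bodies are stand-ins, not Quay's worker code:

    import logging

    from apscheduler.schedulers.blocking import BlockingScheduler

    logging.basicConfig(level=logging.DEBUG)  # surfaces the same apscheduler debug lines seen above

    def poll_queue():
        # Stand-in for the real work; the actual workers poll a database-backed queue.
        print("Getting work item from queue.")

    def run_watchdog():
        print("Running watchdog.")

    scheduler = BlockingScheduler()
    scheduler.add_job(poll_queue, "interval", seconds=10)   # e.g. notificationworker polls every 10s
    scheduler.add_job(run_watchdog, "interval", minutes=1)  # watchdogs run once a minute
    scheduler.start()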
securityscanningnotificationworker stdout | 2025-02-14 01:53:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:53:50,328 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:53:50,694 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:53:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:53:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:22.310342+00:00 (in 29.999546 seconds) autopruneworker stdout | 2025-02-14 01:53:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:22 UTC)" (scheduled at 2025-02-14 01:53:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:53:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494432316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:53:52,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:53:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:53:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:22 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:53:52,480 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:05.898886+00:00 (in 12.997868 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:52 UTC)" (scheduled at 2025-02-14 01:53:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,910 [71] [DEBUG] [data.database] Disconnecting from database. 
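The autopruneworker query a few entries above selects one stale autoprunetaskstatus row with FOR UPDATE SKIP LOCKED, which lets several replicas poll the same table without blocking each other or claiming the same task twice. A stripped-down psycopg2 illustration of that claim pattern; the table and ordering follow the log, the connection string is a placeholder, and the enabled-namespace subquery is omitted for brevity:

    import psycopg2

    conn = psycopg2.connect("dbname=quay user=quay password=secret host=localhost")  # placeholder DSN

    CLAIM_SQL = """
    SELECT id, namespace_id, last_ran_ms, status
    FROM autoprunetaskstatus
    WHERE last_ran_ms < %s OR last_ran_ms IS NULL
    ORDER BY last_ran_ms ASC NULLS FIRST
    LIMIT 1
    FOR UPDATE SKIP LOCKED
    """

    def claim_task(cutoff_ms: int):
        """Claim at most one stale task; rows locked by other workers are skipped, not waited on."""
        with conn:  # a real worker would do its pruning before this block commits and releases the lock
            with conn.cursor() as cur:
                cur.execute(CLAIM_SQL, (cutoff_ms,))
                return cur.fetchone()  # None matches "no autoprune tasks found, exiting..." above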
manifestsubjectbackfillworker stdout | 2025-02-14 01:53:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:53:53,364 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:53:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:53:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:59.232325+00:00 (in 5.000736 seconds) securityworker stdout | 2025-02-14 01:53:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:24 UTC)" (scheduled at 2025-02-14 01:53:54.231161+00:00) securityworker stdout | 2025-02-14 01:53:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:53:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:53:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:53:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:53:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:53:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
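Before indexing anything, the securityworker above asks Clair's indexer for its index_state, using a JWT it just generated for the request; a changed indexer state or hash is what later drives reindexing decisions. A hedged sketch of that status call with requests: the URL is copied from the log, while the bearer-style Authorization header and the JSON shape of the response are assumptions rather than anything this log confirms:

    import requests

    INDEX_STATE_URL = (
        "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
        "/indexer/api/v1/index_state"
    )

    def clair_index_state(jwt_token: str):
        """Fetch Clair's indexer state; jwt_token is assumed to come from the Quay/Clair shared key."""
        resp = requests.get(
            INDEX_STATE_URL,
            headers={"Authorization": "Bearer " + jwt_token},  # assumed auth scheme
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # assumed to be a small JSON document describing the indexer state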
securityworker stdout | 2025-02-14 01:53:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:53:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:53:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:53:55.392556+00:00 (in 1.001710 seconds) gcworker stdout | 2025-02-14 01:53:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:24 UTC)" (scheduled at 2025-02-14 01:53:54.390410+00:00) gcworker stdout | 2025-02-14 01:53:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:53:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:53:55,199 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:53:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:53:55,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:24.390410+00:00 (in 28.997419 seconds) gcworker stdout | 2025-02-14 01:53:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:25 UTC)" (scheduled at 2025-02-14 01:53:55.392556+00:00) gcworker stdout | 2025-02-14 01:53:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:53:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497735402, None, 1, 0]) gcworker stdout | 2025-02-14 01:53:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:53:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:53:55,582 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:53:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:53:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:05.803718+00:00 (in 9.999561 seconds) notificationworker stdout | 2025-02-14 01:53:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:05 UTC)" (scheduled at 2025-02-14 01:53:55.803718+00:00) notificationworker stdout | 2025-02-14 01:53:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:53:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 55, 804433), True, datetime.datetime(2025, 2, 14, 1, 53, 55, 804433), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:53:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:53:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:53:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:53:56,367 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:53:56,833 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:53:57,142 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:53:57,502 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:53:57,848 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:53:58,002 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:53:58,286 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:53:58,504 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} namespacegcworker stdout | 2025-02-14 01:53:58,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:53:58,505 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:12.505687+00:00 (in 13.999834 seconds) namespacegcworker stdout | 2025-02-14 01:53:58,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:58:58 UTC)" (scheduled at 2025-02-14 01:53:58.505410+00:00) namespacegcworker stdout | 2025-02-14 01:53:58,506 [73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:53:58,515 [73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE 
%s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), True, datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), 0, 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:53:58,518 [73] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), True, datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), 0, 'namespacegc/%', False, datetime.datetime(2025, 2, 14, 1, 53, 58, 506124), 'namespacegc/%']) namespacegcworker stdout | 2025-02-14 01:53:58,520 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:53:58,520 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:58:58 UTC)" executed successfully repositoryactioncounter stdout | 2025-02-14 01:53:58,624 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:53:59,008 [242] [DEBUG] [app] Starting request: urn:request:2615fd89-ce37-4fc3-b454-35d15be77b11 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:59,008 [243] [DEBUG] [app] Starting request: urn:request:8c1bb595-0378-4396-a4d9-2bf7daaaa292 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:53:59,010 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:59,010 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:59,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:59,013 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:53:59,014 [246] [DEBUG] [app] Starting request: urn:request:5feea858-91b1-4c8d-833c-2d63f985268f (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:59,014 [253] [DEBUG] [app] Starting request: urn:request:26751568-2076-4a1d-bdeb-b604a90b8cd1 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:53:59,014 [246] [DEBUG] [app] Ending request: urn:request:5feea858-91b1-4c8d-833c-2d63f985268f (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:5feea858-91b1-4c8d-833c-2d63f985268f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:59,014 [253] [DEBUG] [app] Ending request: urn:request:26751568-2076-4a1d-bdeb-b604a90b8cd1 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:26751568-2076-4a1d-bdeb-b604a90b8cd1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:53:59,015 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:53:59,015 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:59,015 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:59,015 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:59,017 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:59,017 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:53:59,019 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:59,019 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:53:59,020 [244] [DEBUG] [app] Starting request: urn:request:abcb6a82-5c88-4a8f-b038-7e00dea6fccf (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:59,020 [244] [DEBUG] [app] Ending request: urn:request:abcb6a82-5c88-4a8f-b038-7e00dea6fccf (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:abcb6a82-5c88-4a8f-b038-7e00dea6fccf', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:53:59,020 [245] [DEBUG] [app] Starting request: urn:request:8cd3e73b-bcde-4527-9fe0-b937012102c1 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:53:59,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:53:59,021 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:53:59,021 [245] [DEBUG] [app] Ending request: urn:request:8cd3e73b-bcde-4527-9fe0-b937012102c1 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:8cd3e73b-bcde-4527-9fe0-b937012102c1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:53:59,021 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:59,021 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:53:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:53:59,021 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:59,021 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:53:59,022 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:53:59,022 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:53:59,027 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:53:59,027 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:53:59,027 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:59,027 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:53:59,034 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:59,034 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:53:59,036 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:59,036 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:53:59,039 [243] [DEBUG] [app] Ending request: urn:request:8c1bb595-0378-4396-a4d9-2bf7daaaa292 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:8c1bb595-0378-4396-a4d9-2bf7daaaa292', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:59,039 [242] [DEBUG] [app] Ending request: urn:request:2615fd89-ce37-4fc3-b454-35d15be77b11 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:2615fd89-ce37-4fc3-b454-35d15be77b11', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:53:59,039 [243] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:59,039 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:53:59,039 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:53:59,039 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:53:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.033) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:53:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) repositorygcworker stdout | 2025-02-14 01:53:59,101 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityscanningnotificationworker stdout | 2025-02-14 01:53:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:53:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:48.125163+00:00 (in 49.001546 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:53:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:59 UTC)" (scheduled at 2025-02-14 01:53:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:53:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
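The /health/instance cycle above shows the flow behind the recurring InsecureRequestWarning: nginx hands the kube-probe request to a gunicorn-web worker, which then calls its own registry and web endpoints over HTTPS with certificate verification disabled before checking the database. A minimal sketch of that self-ping, assuming only the endpoints and port 8443 visible in the log (illustrative, not Quay's own code):

import requests
import urllib3

# verify=False is what raises urllib3's InsecureRequestWarning seen in the log.
# Passing the internal CA bundle via verify="/path/to/ca.pem" (path hypothetical)
# would satisfy the "Adding certificate verification is strongly advised" hint;
# disable_warnings() merely silences it.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

for path in ("/v1/_internal_ping", "/_internal_ping"):
    resp = requests.get(f"https://localhost:8443{path}", verify=False, timeout=5)
    print(path, resp.status_code)

Both internal pings answer 200 with a 4-byte body in the healthy case logged here, after which the outer /health/instance probe returns 200 to kube-probe.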
securityscanningnotificationworker stdout | 2025-02-14 01:53:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 53, 59, 123916), True, datetime.datetime(2025, 2, 14, 1, 53, 59, 123916), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:53:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:53:59,134 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:53:59,134 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:54:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:53:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:53:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:24.231161+00:00 (in 24.998417 seconds) securityworker stdout | 2025-02-14 01:53:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:29 UTC)" (scheduled at 2025-02-14 01:53:59.232325+00:00) securityworker stdout | 2025-02-14 01:53:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:53:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:53:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:53:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:53:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:59,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 48, 59, 236411), 1, 2]) securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:53:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:53:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 48, 59, 236411), 1, 2]) securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:53:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:53:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:53:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:53:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:53:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:53:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:53:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:53:59,255 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:53:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:53:59,798 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:54:01,322 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:54:01,325 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:54:01,327 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:54:01,332 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:54:01,335 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:54:01,549 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:54:02,406 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:54:02,761 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:54:03,238 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:54:03,240 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:54:03,243 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:54:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for 
jobs to run buildlogsarchiver stdout | 2025-02-14 01:54:04,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:34.000511+00:00 (in 29.999504 seconds) buildlogsarchiver stdout | 2025-02-14 01:54:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:34 UTC)" (scheduled at 2025-02-14 01:54:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:54:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 54, 4, 1296), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:54:04,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:54:04,011 [59] [DEBUG] [data.database] Disconnecting from database. buildlogsarchiver stdout | 2025-02-14 01:54:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:54:04,533 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:54:04,537 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:54:04,541 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:54:04,546 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:54:04,548 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:54:04,551 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:54:04,553 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:54:04,591 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:54:04,602 [251] [DEBUG] 
[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:54:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:07.807092+00:00 (in 2.002934 seconds) notificationworker stdout | 2025-02-14 01:54:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:15 UTC)" (scheduled at 2025-02-14 01:54:05.803718+00:00) notificationworker stdout | 2025-02-14 01:54:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:54:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 5, 804432), True, datetime.datetime(2025, 2, 14, 1, 54, 5, 804432), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:54:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:52.900596+00:00 (in 47.001309 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:05 UTC)" (scheduled at 2025-02-14 01:54:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,899 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,908 [71] [DEBUG] [data.database] Disconnecting from database. 
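The poll_queue entries that notificationworker logs every ten seconds (and that the securityscanningnotification and namespacegc workers log on their own intervals) all share one claim query: pre-select up to 50 eligible queueitem rows for the worker's queue-name prefix, then take one at random. Eligibility means available_after has passed, the item is either unclaimed or its claim has expired, and retries remain. A condensed rendering of that query with psycopg2, assuming a reachable DSN (illustrative only; Quay issues it through peewee as logged):

from datetime import datetime

import psycopg2

POLL_SQL = """
SELECT t1.id, t1.queue_name, t1.body
FROM queueitem AS t1
INNER JOIN (
    SELECT id FROM queueitem
    WHERE available_after <= %s
      AND (available = %s OR processing_expires <= %s)
      AND retries_remaining > %s
      AND queue_name ILIKE %s
    LIMIT %s
) AS j1 ON t1.id = j1.id
ORDER BY random()
LIMIT %s OFFSET %s
"""

conn = psycopg2.connect("dbname=quay host=localhost user=quay")  # hypothetical DSN
now = datetime.utcnow()
with conn.cursor() as cur:
    # Same parameter order as the logged peewee call: batch of 50, claim 1.
    cur.execute(POLL_SQL, (now, True, now, 0, "notification/%", 50, 1, 0))
    item = cur.fetchone()
    print(item if item else "No more work.")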
manifestsubjectbackfillworker stdout | 2025-02-14 01:54:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:54:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:15.803718+00:00 (in 7.996164 seconds) notificationworker stdout | 2025-02-14 01:54:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:07 UTC)" (scheduled at 2025-02-14 01:54:07.807092+00:00) notificationworker stdout | 2025-02-14 01:54:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. notificationworker stdout | 2025-02-14 01:54:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:54:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:54:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:45.503718+00:00 (in 32.997509 seconds) namespacegcworker stdout | 2025-02-14 01:54:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:12 UTC)" (scheduled at 2025-02-14 01:54:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:54:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:54:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:54:14,008 [244] [DEBUG] [app] Starting request: urn:request:00930819-1acd-4ef3-9746-9e228182db31 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:14,008 [242] [DEBUG] [app] Starting request: urn:request:98e7a750-cafe-4738-a527-7f1db615618c (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:14,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:14,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:14,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:54:14,013 [253] [DEBUG] [app] Starting request: urn:request:15c7dbcb-6877-4795-a9da-eb34fc041198 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:14,013 [246] [DEBUG] [app] Starting request: urn:request:c5bd11e7-5b30-4b2c-a513-ce5afc7ede03 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:14,013 [253] [DEBUG] [app] Ending request: urn:request:15c7dbcb-6877-4795-a9da-eb34fc041198 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:15c7dbcb-6877-4795-a9da-eb34fc041198', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:54:14,014 [246] [DEBUG] [app] Ending request: urn:request:c5bd11e7-5b30-4b2c-a513-ce5afc7ede03 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:c5bd11e7-5b30-4b2c-a513-ce5afc7ede03', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:54:14,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:14,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:54:14,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:14,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:14,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:14,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:14,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:14,017 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:14,018 [245] [DEBUG] [app] Starting request: urn:request:b84f3788-b240-4b66-8f8d-5d070d134896 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:14,018 [243] [DEBUG] [app] Starting request: urn:request:3433e35b-1ba8-455d-b786-f4896a1a8de9 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:14,018 [245] [DEBUG] [app] Ending request: urn:request:b84f3788-b240-4b66-8f8d-5d070d134896 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:b84f3788-b240-4b66-8f8d-5d070d134896', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:14,018 [243] [DEBUG] [app] Ending request: urn:request:3433e35b-1ba8-455d-b786-f4896a1a8de9 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:3433e35b-1ba8-455d-b786-f4896a1a8de9', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:14,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:14,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:14,019 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:14,019 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:14,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:14,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:14,019 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:14,019 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:14,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:54:14,025 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:14,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:54:14,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:14,031 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:14,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:14,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:14,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:14,036 [244] [DEBUG] [app] Ending request: urn:request:00930819-1acd-4ef3-9746-9e228182db31 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:00930819-1acd-4ef3-9746-9e228182db31', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:14,036 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:54:14,037 [242] [DEBUG] [app] Ending request: urn:request:98e7a750-cafe-4738-a527-7f1db615618c (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:98e7a750-cafe-4738-a527-7f1db615618c', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:14,037 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:54:14,037 [242] [DEBUG] [data.database] Disconnecting from database. 
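The database half of the instance health check above is deliberately bounded: each gunicorn-web worker sets statement_timeout to 5000 ms, checks that at least one teamrole row exists, then resets the timeout to 0 before finishing the request. The same guard written directly against psycopg2 (a sketch; the DSN is an assumption, the statements mirror the log):

import psycopg2

def team_roles_exist(conn, timeout_ms=5000):
    """Bounded existence probe, per the logged health check."""
    with conn.cursor() as cur:
        # psycopg2 interpolates %s client-side, so SET accepts it here,
        # just as the peewee entries above show.
        cur.execute("SET statement_timeout=%s;", (timeout_ms,))
        try:
            cur.execute("SELECT id, name FROM teamrole LIMIT %s", (1,))
            return cur.fetchone() is not None
        finally:
            cur.execute("SET statement_timeout=%s;", (0,))

conn = psycopg2.connect("dbname=quay host=localhost user=quay")  # hypothetical DSN
print(team_roles_exist(conn))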
gunicorn-web stdout | 2025-02-14 01:54:14,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) exportactionlogsworker stdout | 2025-02-14 01:54:14,812 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:54:14,906 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:54:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:25.803718+00:00 (in 9.999567 seconds) notificationworker stdout | 2025-02-14 01:54:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:25 UTC)" (scheduled at 2025-02-14 01:54:15.803718+00:00) notificationworker stdout | 2025-02-14 01:54:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:54:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 15, 804484), True, datetime.datetime(2025, 2, 14, 1, 54, 15, 804484), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:54:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:54:16,960 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:54:20,353 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:54:20,730 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:54:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:54:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:52.310342+00:00 (in 29.999556 seconds) autopruneworker stdout | 2025-02-14 01:54:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:52 UTC)" (scheduled at 2025-02-14 01:54:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:54:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494462316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:54:22,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:54:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
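The autopruneworker query just above claims work with FOR UPDATE SKIP LOCKED and ORDER BY last_ran_ms ... NULLS FIRST, so several Quay pods can poll autoprunetaskstatus concurrently and simply skip rows another pod has already locked, taking the longest-idle task first. The claim-one-row shape of that query, trimmed to the columns the log shows (sketch; the enabled-namespace subquery is omitted and the DSN is assumed):

import psycopg2

CLAIM_SQL = """
SELECT id, namespace_id, last_ran_ms, status
FROM autoprunetaskstatus
WHERE last_ran_ms < %s OR last_ran_ms IS NULL
ORDER BY last_ran_ms ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

conn = psycopg2.connect("dbname=quay host=localhost user=quay")  # hypothetical DSN
with conn, conn.cursor() as cur:   # hold the row lock for the length of the transaction
    cur.execute(CLAIM_SQL, (1739494462316,))   # cutoff in epoch milliseconds, as logged
    task = cur.fetchone()
    if task is None:
        print("no autoprune tasks found, exiting...")
    else:
        print("claimed autoprune task", task[0])
# leaving the with-block commits (or rolls back) and releases the lock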
autopruneworker stdout | 2025-02-14 01:54:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:54:22,497 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:54:23,395 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:54:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:54:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:29.232325+00:00 (in 5.000706 seconds) securityworker stdout | 2025-02-14 01:54:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:54 UTC)" (scheduled at 2025-02-14 01:54:24.231161+00:00) securityworker stdout | 2025-02-14 01:54:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:54:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:54:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:54:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:54:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:54:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
securityworker stdout | 2025-02-14 01:54:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:54:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:54:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:25.392556+00:00 (in 1.001721 seconds) gcworker stdout | 2025-02-14 01:54:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:54 UTC)" (scheduled at 2025-02-14 01:54:24.390410+00:00) gcworker stdout | 2025-02-14 01:54:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:54:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:54 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:54:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:54:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:30.212654+00:00 (in 4.996945 seconds) exportactionlogsworker stdout | 2025-02-14 01:54:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:25 UTC)" (scheduled at 2025-02-14 01:54:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:54:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:54:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:25 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:54:25,235 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:54:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:54:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:54.390410+00:00 (in 28.997431 seconds) gcworker stdout | 2025-02-14 01:54:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:55 UTC)" (scheduled at 2025-02-14 01:54:25.392556+00:00) gcworker stdout | 2025-02-14 01:54:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:54:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497765401, None, 1, 0]) gcworker stdout | 2025-02-14 01:54:25,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:54:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:54:25,616 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:54:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:35.803718+00:00 (in 9.999571 seconds) notificationworker stdout | 2025-02-14 01:54:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:35 UTC)" (scheduled at 2025-02-14 01:54:25.803718+00:00) notificationworker stdout | 2025-02-14 01:54:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:54:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 25, 804417), True, datetime.datetime(2025, 2, 14, 1, 54, 25, 804417), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
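Each worker process in this pod also pushes its Prometheus registry to the local pushgateway on port 9091 about every 30 seconds, grouped by host, process_name, and pid; that is what the long runs of util.metrics.prometheus DEBUG lines record, including the block that follows. A minimal reproduction with prometheus_client (illustrative: the metric and job names are made up, while the gateway address and grouping-key fields come from the log):

import os
import socket

from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
# Hypothetical metric, just so the registry has something to push.
heartbeat = Counter("worker_heartbeat_total", "Pushes from this worker", registry=registry)
heartbeat.inc()

push_to_gateway(
    "localhost:9091",                  # pushgateway address from the log
    job="quay_worker",                 # job label is an assumption, not shown in the log
    registry=registry,
    grouping_key={                     # the three keys the log shows
        "host": socket.gethostname(),
        "process_name": "notificationworker.py",
        "pid": str(os.getpid()),
    },
)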
notificationworker stdout | 2025-02-14 01:54:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:54:26,386 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:54:26,839 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:54:27,178 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:54:27,530 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:54:27,884 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:54:28,031 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:54:28,303 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:54:28,524 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:54:28,659 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:54:29,007 [245] [DEBUG] [app] Starting request: urn:request:f5f03beb-2630-420e-9065-0476038b1d66 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:29,008 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:29,008 [242] [DEBUG] [app] Starting request: urn:request:4791ec3a-a003-4c7b-95f9-d6ffbaf42a73 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:29,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:54:29,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:29,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:54:29,012 [246] [DEBUG] [app] Starting request: urn:request:577f5580-e8e8-4f04-b628-bf92ef3bcb37 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:29,013 [246] [DEBUG] [app] Ending request: urn:request:577f5580-e8e8-4f04-b628-bf92ef3bcb37 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:577f5580-e8e8-4f04-b628-bf92ef3bcb37', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:54:29,013 [251] [DEBUG] [app] Starting request: urn:request:2909125c-8531-4d7f-a353-86df8686a6a3 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:29,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:29,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:54:29,013 [251] [DEBUG] [app] Ending request: urn:request:2909125c-8531-4d7f-a353-86df8686a6a3 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:2909125c-8531-4d7f-a353-86df8686a6a3', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:54:29,014 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:29,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-web stdout | 2025-02-14 01:54:29,014 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:29,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:54:29,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:29,017 [242] [DEBUG] [app] Starting request: urn:request:46bc4de2-cf09-49d2-aaf8-117900121a36 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:29,018 [242] [DEBUG] [app] Ending request: urn:request:46bc4de2-cf09-49d2-aaf8-117900121a36 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:46bc4de2-cf09-49d2-aaf8-117900121a36', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:29,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:29,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:29,018 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:29,018 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:29,018 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:29,019 [244] [DEBUG] [app] Starting request: urn:request:4bcfaae7-e745-4daa-856b-7e5346141c2e (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:29,020 [244] [DEBUG] [app] Ending request: urn:request:4bcfaae7-e745-4daa-856b-7e5346141c2e (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:4bcfaae7-e745-4daa-856b-7e5346141c2e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:29,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:29,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:29,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:29,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:29,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:54:29,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:29,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:54:29,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:29,033 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:29,034 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:29,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:29,037 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:29,038 [242] [DEBUG] [app] Ending request: urn:request:4791ec3a-a003-4c7b-95f9-d6ffbaf42a73 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:4791ec3a-a003-4c7b-95f9-d6ffbaf42a73', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:29,038 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:54:29,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) gunicorn-web stdout | 2025-02-14 01:54:29,039 [245] [DEBUG] [app] Ending request: urn:request:f5f03beb-2630-420e-9065-0476038b1d66 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:f5f03beb-2630-420e-9065-0476038b1d66', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:29,040 [245] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.034 47 0.034) gunicorn-web stdout | 2025-02-14 01:54:29,040 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" repositorygcworker stdout | 2025-02-14 01:54:29,115 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stdout | 2025-02-14 01:54:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:54:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:54.231161+00:00 (in 24.998406 seconds) securityworker stdout | 2025-02-14 01:54:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:59 UTC)" (scheduled at 2025-02-14 01:54:29.232325+00:00) securityworker stdout | 2025-02-14 01:54:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:54:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:54:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:54:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:29,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:29,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:54:29,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:29,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 49, 29, 236469), 1, 2]) securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker
securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:29,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 49, 29, 236469), 1, 2]) securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2
securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:29,254 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:54:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:54:29,831 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:54:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:54:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:39.215004+00:00 (in 9.001922 seconds) exportactionlogsworker stdout | 2025-02-14 01:54:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:30 UTC)" (scheduled at 2025-02-14 01:54:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:54:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:54:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 30, 213374), True, datetime.datetime(2025, 2, 14, 1, 54, 30, 213374), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:54:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:54:30,223 [63] [DEBUG] [data.database] Disconnecting from database.
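
The securityworker entries above show the indexing pass first checking the Clair indexer's index_state endpoint (GET .../indexer/api/v1/index_state, answered 200) before looking for manifests to index. For troubleshooting, that same reachability check can be reproduced from inside the cluster; the following is only a minimal sketch, not Quay's internal client: the URL is taken from the log above, and the CLAIR_TOKEN variable is an assumed placeholder for a PSK-signed JWT that is only needed if Clair authentication is enabled.

    # Minimal sketch: reproduce the index_state check seen in the securityworker log.
    import os
    import requests

    CLAIR_URL = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state"

    headers = {}
    token = os.environ.get("CLAIR_TOKEN")  # placeholder; supply a Clair PSK JWT only if auth is enabled
    if token:
        headers["Authorization"] = f"Bearer {token}"

    resp = requests.get(CLAIR_URL, headers=headers, timeout=10)
    # A healthy indexer returns HTTP 200, matching the 200 recorded in the log above.
    print(resp.status_code, resp.text)
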
exportactionlogsworker stdout | 2025-02-14 01:54:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:54:31,331 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:54:31,333 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:54:31,336 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:54:31,339 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:54:31,342 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:54:31,568 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:54:32,434 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:54:32,770 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:54:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:54:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:36.014770+00:00 (in 3.002648 seconds) repositorygcworker stdout | 2025-02-14 01:54:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:33 UTC)" (scheduled at 2025-02-14 01:54:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:54:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:54:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 33, 12437), True, datetime.datetime(2025, 2, 14, 1, 54, 33, 12437), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:54:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:54:33,023 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:54:33,023 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:54:33,245 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:54:33,249 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:54:33,251 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:54:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:54:34,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:04.000511+00:00 (in 29.999495 seconds) buildlogsarchiver stdout | 2025-02-14 01:54:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:04 UTC)" (scheduled at 2025-02-14 01:54:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:54:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 54, 34, 1309), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:54:34,010 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:54:34,010 [59] [DEBUG] [data.database] Disconnecting from database. 
buildlogsarchiver stdout | 2025-02-14 01:54:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:54:34,544 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:54:34,547 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:54:34,549 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:54:34,555 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:54:34,559 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:54:34,562 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:54:34,565 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:54:34,600 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:54:34,610 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:54:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:41.806837+00:00 (in 6.002680 seconds) notificationworker stdout | 2025-02-14 01:54:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:45 UTC)" (scheduled at 2025-02-14 01:54:35.803718+00:00) notificationworker stdout | 2025-02-14 01:54:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:54:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 35, 804499), True, datetime.datetime(2025, 2, 14, 1, 54, 35, 804499), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:35,815 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:35,815 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:54:35,815 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:54:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:54:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:43.014615+00:00 (in 6.999424 seconds) repositorygcworker stdout | 2025-02-14 01:54:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:36 UTC)" (scheduled at 2025-02-14 01:54:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:54:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. 
repositorygcworker stdout | 2025-02-14 01:54:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:36 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:54:39,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:54:39,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:25.215238+00:00 (in 45.999797 seconds) exportactionlogsworker stdout | 2025-02-14 01:54:39,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:39 UTC)" (scheduled at 2025-02-14 01:54:39.215004+00:00) exportactionlogsworker stdout | 2025-02-14 01:54:39,216 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:54:39,225 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), True, datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), 0, 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:54:39,228 [63] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), True, datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), 0, 'exportactionlogs/%', False, datetime.datetime(2025, 2, 14, 1, 54, 39, 215741), 'exportactionlogs/%']) exportactionlogsworker stdout | 2025-02-14 01:54:39,230 [63] [DEBUG] [data.database] Disconnecting from database. 
exportactionlogsworker stdout | 2025-02-14 01:54:39,230 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:39 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:54:41,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:41,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:45.803718+00:00 (in 3.996445 seconds) notificationworker stdout | 2025-02-14 01:54:41,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:41 UTC)" (scheduled at 2025-02-14 01:54:41.806837+00:00) notificationworker stdout | 2025-02-14 01:54:41,807 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), 'notification/%']) notificationworker stdout | 2025-02-14 01:54:41,817 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), True, datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), 0, 'notification/%']) notificationworker stdout | 2025-02-14 01:54:41,820 [75] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), True, datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), 0, 'notification/%', False, datetime.datetime(2025, 2, 14, 1, 54, 41, 807538), 'notification/%']) notificationworker stdout | 2025-02-14 01:54:41,822 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:54:41,822 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:41 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:54:43,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:54:43,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:33.011632+00:00 (in 49.996545 seconds) repositorygcworker stdout | 2025-02-14 01:54:43,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:43 UTC)" (scheduled at 2025-02-14 01:54:43.014615+00:00) repositorygcworker stdout | 2025-02-14 01:54:43,015 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:54:43,024 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), True, datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), 0, 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:54:43,027 [85] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), True, datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), 0, 'repositorygc/%', False, datetime.datetime(2025, 2, 14, 1, 54, 43, 15360), 'repositorygc/%']) repositorygcworker stdout | 2025-02-14 01:54:43,030 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:54:43,030 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 01:59:43 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:54:44,008 [242] [DEBUG] [app] Starting request: urn:request:9b848d65-b74f-4e87-a2d8-82f769c84dc2 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:44,009 [244] [DEBUG] [app] Starting request: urn:request:239c7fb7-d19d-4ee3-87a9-80e0d93e6390 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:44,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:44,010 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:54:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:44,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:54:44,012 [246] [DEBUG] [app] Starting request: urn:request:12bff081-de9c-49cf-bae3-f9d65a172a4c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:44,013 [246] [DEBUG] [app] Ending request: urn:request:12bff081-de9c-49cf-bae3-f9d65a172a4c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:12bff081-de9c-49cf-bae3-f9d65a172a4c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:54:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:44,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:54:44,013 [253] [DEBUG] [app] Starting request: urn:request:5ca59d19-5736-4973-8ff3-e399ea37e672 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:44,014 [253] [DEBUG] [app] Ending request: urn:request:5ca59d19-5736-4973-8ff3-e399ea37e672 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:5ca59d19-5736-4973-8ff3-e399ea37e672', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:54:44,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:44,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:44,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:44,016 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:54:44,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:44,018 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:44,018 [242] [DEBUG] [app] Starting request: urn:request:64d81a56-6e6b-49ea-81a2-3bc5552dfc28 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:44,018 [242] [DEBUG] [app] Ending request: urn:request:64d81a56-6e6b-49ea-81a2-3bc5552dfc28 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:64d81a56-6e6b-49ea-81a2-3bc5552dfc28', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:44,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:44,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:44,019 [243] [DEBUG] [app] Starting request: urn:request:dfc3ba96-f2c5-40aa-9e17-dbc249b68233 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:44,019 [243] [DEBUG] [app] Ending request: urn:request:dfc3ba96-f2c5-40aa-9e17-dbc249b68233 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:dfc3ba96-f2c5-40aa-9e17-dbc249b68233', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:44,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:44,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:44,020 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:44,020 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:44,020 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:44,020 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:44,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:54:44,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:44,026 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:54:44,026 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:44,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:44,033 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:44,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:44,035 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:44,036 [242] [DEBUG] [app] Ending request: urn:request:9b848d65-b74f-4e87-a2d8-82f769c84dc2 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:9b848d65-b74f-4e87-a2d8-82f769c84dc2', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:44,036 [242] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) gunicorn-web stdout | 2025-02-14 01:54:44,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:54:44,038 [244] [DEBUG] [app] Ending request: urn:request:239c7fb7-d19d-4ee3-87a9-80e0d93e6390 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:239c7fb7-d19d-4ee3-87a9-80e0d93e6390', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:44,038 [244] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:54:44,038 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) exportactionlogsworker stdout | 2025-02-14 01:54:44,837 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:54:44,921 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:54:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:54:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:12.505687+00:00 (in 27.001482 seconds) namespacegcworker stdout | 2025-02-14 01:54:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:45 UTC)" (scheduled at 2025-02-14 01:54:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:54:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:54:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 45, 504541), True, datetime.datetime(2025, 2, 14, 1, 54, 45, 504541), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:54:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:54:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:54:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:54:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:55.803718+00:00 (in 9.999575 seconds) notificationworker stdout | 2025-02-14 01:54:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:55 UTC)" (scheduled at 2025-02-14 01:54:45.803718+00:00) notificationworker stdout | 2025-02-14 01:54:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:54:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 45, 804478), True, datetime.datetime(2025, 2, 14, 1, 54, 45, 804478), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:54:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:54:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:54:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:54:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:46.009738+00:00 (in 59.999581 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:54:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:46 UTC)" (scheduled at 2025-02-14 01:54:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:54:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:54:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:54:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:54:46,996 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:54:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:54:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:59.123196+00:00 (in 10.997582 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:54:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:48 UTC)" (scheduled at 2025-02-14 01:54:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:54:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
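The namespacegcworker and notificationworker entries above are the same QueueWorker shape: an interval job wakes up, pulls at most one available queueitem whose queue_name matches the worker's prefix, and logs "No more work." when nothing is due. A simplified sketch of that poll; the SQL mirrors the peewee query in the log but is trimmed down, and the db handle is assumed to be a connected peewee database like the one sketched earlier:

# Sketch: poll a work queue by name prefix, as the QueueWorker jobs above do.
# `db` is assumed to be a connected peewee PostgresqlDatabase (see earlier sketch).
from datetime import datetime

POLL_SQL = """
SELECT "id", "queue_name", "body"
FROM "queueitem"
WHERE "available_after" <= %s
  AND ("available" = %s OR "processing_expires" <= %s)
  AND "retries_remaining" > %s
  AND "queue_name" ILIKE %s
ORDER BY Random()
LIMIT 1
"""

def poll_queue(db, prefix: str):
    """Return one runnable queue item for this worker's prefix, or None."""
    now = datetime.utcnow()
    cursor = db.execute_sql(POLL_SQL, (now, True, now, 0, prefix + "/%"))
    row = cursor.fetchone()
    if row is None:
        # Corresponds to the "No more work." debug line in the log.
        return None
    item_id, queue_name, body = row
    return {"id": item_id, "queue_name": queue_name, "body": body}

# e.g. poll_queue(db, "notification") or poll_queue(db, "namespacegc")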
securityscanningnotificationworker stdout | 2025-02-14 01:54:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:54:50,376 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:54:50,766 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:54:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:54:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:22.310342+00:00 (in 29.999558 seconds) autopruneworker stdout | 2025-02-14 01:54:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:22 UTC)" (scheduled at 2025-02-14 01:54:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:54:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494492316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:54:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:54:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
autopruneworker stdout | 2025-02-14 01:54:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:22 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:54:52,506 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:05.898886+00:00 (in 12.997862 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:52 UTC)" (scheduled at 2025-02-14 01:54:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,910 [71] [DEBUG] [data.database] Disconnecting from database. 
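The autoprune query above claims its next task with ORDER BY last_ran_ms ... FOR UPDATE SKIP LOCKED, so multiple autopruneworker replicas can poll the same table without contending: a row locked by another worker is skipped rather than waited on. A stripped-down sketch of that claim step (again via execute_sql; the user-enabled subquery from the log and the actual pruning are omitted for brevity):

# Sketch: claim the stalest autoprune task without blocking on other workers.
# `db` is assumed to be a connected peewee PostgresqlDatabase.
import time

CLAIM_SQL = """
SELECT "id", "namespace_id", "last_ran_ms"
FROM "autoprunetaskstatus"
WHERE "last_ran_ms" < %s OR "last_ran_ms" IS NULL
ORDER BY "last_ran_ms" ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

def claim_autoprune_task(db, max_age_ms: int = 60_000):
    """Inside a transaction, lock one due task; concurrent workers skip it."""
    cutoff = int(time.time() * 1000) - max_age_ms
    with db.atomic():                      # keep a transaction open so the row lock holds
        row = db.execute_sql(CLAIM_SQL, (cutoff,)).fetchone()
        if row is None:
            return None                    # "no autoprune tasks found, exiting..."
        task_id, namespace_id, last_ran_ms = row
        # ... prune the namespace here, then update last_ran_ms before the
        # transaction commits and releases the lock ...
        return task_id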
manifestsubjectbackfillworker stdout | 2025-02-14 01:54:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:54:53,431 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:54:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:54:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:59.232325+00:00 (in 5.000714 seconds) securityworker stdout | 2025-02-14 01:54:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:24 UTC)" (scheduled at 2025-02-14 01:54:54.231161+00:00) securityworker stdout | 2025-02-14 01:54:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:54:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:54:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:54:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:54:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:54:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
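Before indexing anything, the securityworker above generates a short-lived JWT and GETs Clair's /indexer/api/v1/index_state endpoint. A hedged sketch of that handshake using PyJWT and requests; the pre-shared key and the exact claim set are assumptions (Clair v4's PSK auth expects a signed bearer token, but the issuer and expiry values here are illustrative, not copied from this deployment):

# Sketch: ask Clair for its current indexer state, authenticating with a
# PSK-signed JWT. Key and claims are illustrative assumptions.
import time
import jwt        # PyJWT
import requests

CLAIR_URL = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
PSK = b"shared-secret-from-config"   # assumption: supplied by the security scanner config

def get_index_state() -> str:
    token = jwt.encode(
        {"iss": "quay", "iat": int(time.time()), "exp": int(time.time()) + 300},
        PSK,
        algorithm="HS256",
    )
    resp = requests.get(
        f"{CLAIR_URL}/indexer/api/v1/index_state",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Clair responds with a small JSON document carrying the indexer state hash.
    return resp.json()["state"]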
securityworker stdout | 2025-02-14 01:54:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:54:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:54:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:54:55.392556+00:00 (in 1.001743 seconds) gcworker stdout | 2025-02-14 01:54:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:24 UTC)" (scheduled at 2025-02-14 01:54:54.390410+00:00) gcworker stdout | 2025-02-14 01:54:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:54:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:54:55,242 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:54:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:54:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:24.390410+00:00 (in 28.997435 seconds) gcworker stdout | 2025-02-14 01:54:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:25 UTC)" (scheduled at 2025-02-14 01:54:55.392556+00:00) gcworker stdout | 2025-02-14 01:54:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:54:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497795401, None, 1, 0]) gcworker stdout | 2025-02-14 01:54:55,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:54:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:54:55,652 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:54:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:54:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:05.803718+00:00 (in 9.999576 seconds) notificationworker stdout | 2025-02-14 01:54:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:05 UTC)" (scheduled at 2025-02-14 01:54:55.803718+00:00) notificationworker stdout | 2025-02-14 01:54:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:54:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 55, 804408), True, datetime.datetime(2025, 2, 14, 1, 54, 55, 804408), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:54:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:54:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:54:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:54:56,422 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:54:56,858 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:54:57,214 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:54:57,540 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:54:57,912 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:54:58,067 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:54:58,314 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:54:58,537 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:54:58,695 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:54:59,007 [245] [DEBUG] [app] Starting request: urn:request:86355091-6358-4f28-aca2-b75b3ea51be9 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:59,008 [244] [DEBUG] [app] Starting request: urn:request:b54c9785-fe22-422a-881a-d81925bef5b5 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:54:59,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:59,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:59,011 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:59,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:54:59,012 [246] [DEBUG] [app] Starting request: urn:request:d0cd1999-d433-4f46-b3e3-0c62d4feb94d (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:54:59,013 [246] [DEBUG] [app] Ending request: urn:request:d0cd1999-d433-4f46-b3e3-0c62d4feb94d (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:d0cd1999-d433-4f46-b3e3-0c62d4feb94d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:54:59,013 [253] [DEBUG] [app] Starting request: urn:request:804e685a-14dd-4d5b-a800-26141f867be5 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:54:59,013 [253] [DEBUG] [app] Ending request: urn:request:804e685a-14dd-4d5b-a800-26141f867be5 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:804e685a-14dd-4d5b-a800-26141f867be5', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:54:59,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:54:59,013 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:54:59,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-web stdout | 2025-02-14 01:54:59,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:59,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:59,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:54:59,017 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:59,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:54:59,017 [244] [DEBUG] [app] Starting request: urn:request:a14f8889-a4f1-4213-b8c4-296c18b17a36 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:59,018 [244] [DEBUG] [app] Ending request: urn:request:a14f8889-a4f1-4213-b8c4-296c18b17a36 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:a14f8889-a4f1-4213-b8c4-296c18b17a36', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:59,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:59,018 [242] [DEBUG] [app] Starting request: urn:request:124042b4-2743-488e-ac59-4f2bcec17267 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:54:59,018 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:59,018 [242] [DEBUG] [app] Ending request: urn:request:124042b4-2743-488e-ac59-4f2bcec17267 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:124042b4-2743-488e-ac59-4f2bcec17267', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:54:59,018 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:59,018 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:59,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:54:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:54:59,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:54:59,019 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:54:59,019 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:54:59,024 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:54:59,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
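The InsecureRequestWarning entries above come from the web workers pinging their own TLS listener on localhost without verifying the cluster-internal certificate; each /health/instance probe fans out into /v1/_internal_ping and /_internal_ping round trips before the database checks run. A minimal sketch of that self-check, assuming requests with verify=False is what triggers the warning (silencing it is optional):

# Sketch: the instance health check's internal self-pings. verify=False is an
# assumption consistent with the InsecureRequestWarning seen in the log.
import requests
import urllib3

# Optional: silence the warning instead of letting it land in the logs.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

LOCAL_BASE = "https://localhost:8443"

def instance_services_healthy() -> bool:
    """Ping the registry and web endpoints through the local TLS listener."""
    for path in ("/v1/_internal_ping", "/_internal_ping"):
        resp = requests.get(LOCAL_BASE + path, verify=False, timeout=5)
        if resp.status_code != 200:
            return False
    return True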
gunicorn-web stdout | 2025-02-14 01:54:59,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:59,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:54:59,031 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:59,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:54:59,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:59,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:54:59,036 [244] [DEBUG] [app] Ending request: urn:request:b54c9785-fe22-422a-881a-d81925bef5b5 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b54c9785-fe22-422a-881a-d81925bef5b5', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:59,036 [245] [DEBUG] [app] Ending request: urn:request:86355091-6358-4f28-aca2-b75b3ea51be9 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:86355091-6358-4f28-aca2-b75b3ea51be9', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:54:59,036 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:54:59,036 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:54:59,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:54:59,036 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:54:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:54:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) securityscanningnotificationworker stdout | 2025-02-14 01:54:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:54:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:20.124914+00:00 (in 21.001304 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:54:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:59 UTC)" (scheduled at 2025-02-14 01:54:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:54:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
securityscanningnotificationworker stdout | 2025-02-14 01:54:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 54, 59, 123909), True, datetime.datetime(2025, 2, 14, 1, 54, 59, 123909), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:54:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:54:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:54:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:55:59 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:54:59,139 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stdout | 2025-02-14 01:54:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:54:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:24.231161+00:00 (in 24.998415 seconds) securityworker stdout | 2025-02-14 01:54:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:29 UTC)" (scheduled at 2025-02-14 01:54:59.232325+00:00) securityworker stdout | 2025-02-14 01:54:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:54:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:54:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:54:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:54:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Rand 
max bound: 1 securityworker stdout | 2025-02-14 01:54:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:59,248 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", 
"t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 49, 59, 236338), 1, 2]) securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:54:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:54:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", 
"t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 49, 59, 236338), 1, 2]) securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:59,254 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:54:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:54:59,254 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:54:59,867 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:55:01,339 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:55:01,342 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:55:01,346 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:55:01,348 [68] [DEBUG] 
[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:55:01,351 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:55:01,604 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:55:02,471 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:55:02,804 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:55:03,252 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:55:03,256 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:55:03,258 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:55:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:55:04,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:34.000511+00:00 (in 29.999481 seconds) buildlogsarchiver stdout | 2025-02-14 01:55:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:34 UTC)" (scheduled at 2025-02-14 01:55:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:55:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 55, 4, 1337), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:55:04,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:55:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
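The util.migrate.allocator chatter a few entries above is the securityworker walking the manifest id space in randomly chosen blocks: it reads the current Min/Max manifest ids, picks a random "hole" inside the remaining range, queries for unindexed manifests in that slice, and marks the whole block completed when the query comes back empty (here the range is only 1-2, so every pass ends with "No more work"). A simplified, self-contained sketch of that allocation idea; this is an illustration in plain Python, not Quay's actual allocator:

# Sketch: scan an id range in random blocks, retiring blocks with no candidates.
# Simplified illustration of the util.migrate.allocator behaviour.
import random

def scan_in_random_blocks(min_id, max_id, fetch_candidates, block_size=100):
    """fetch_candidates(lo, hi) returns the rows with lo <= id < hi still needing work."""
    holes = [(min_id, max_id + 1)]           # half-open ranges left to cover
    while holes:
        idx = random.randrange(len(holes))   # "Selected random hole N"
        lo, hi = holes.pop(idx)
        start = random.randint(lo, hi - 1)   # "Rand max bound"
        end = min(start + block_size, hi)
        candidates = fetch_candidates(start, end)
        if not candidates:
            pass                             # "No candidates, marking entire block completed"
        else:
            for row in candidates:
                yield row
        # Re-queue whatever is left on either side of the block just handled.
        if lo < start:
            holes.append((lo, start))
        if end < hi:
            holes.append((end, hi))
    # When holes is empty: "No more work by worker"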
buildlogsarchiver stdout | 2025-02-14 01:55:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:55:04,553 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:55:04,557 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:55:04,559 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:55:04,564 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:55:04,566 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:55:04,572 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:55:04,574 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:55:04,608 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:55:04,617 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:55:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:55:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:07.807092+00:00 (in 2.002959 seconds) notificationworker stdout | 2025-02-14 01:55:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:15 UTC)" (scheduled at 2025-02-14 01:55:05.803718+00:00) notificationworker stdout | 2025-02-14 01:55:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
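Every worker process above periodically pushes its metrics registry to the local Pushgateway on port 9091, keyed by host, process_name and pid so each process keeps its own series. A minimal sketch with prometheus_client; the metric and job names are illustrative, not the ones Quay actually registers:

# Sketch: push a per-process metrics registry to the local Pushgateway with the
# same grouping-key fields the log shows. Metric/job names are illustrative.
import os
import socket
from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
heartbeats = Counter("worker_heartbeats_total", "Pushes performed by this worker",
                     registry=registry)

def push_metrics(process_name: str) -> None:
    heartbeats.inc()
    push_to_gateway(
        "localhost:9091",
        job="quay",                           # assumption: job label not shown in the log
        registry=registry,
        grouping_key={
            "host": socket.gethostname(),     # the pod name in the log's grouping key
            "process_name": process_name,     # e.g. "gcworker.py"
            "pid": str(os.getpid()),
        },
    )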
notificationworker stdout | 2025-02-14 01:55:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 5, 804416), True, datetime.datetime(2025, 2, 14, 1, 55, 5, 804416), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:55:05,815 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:55:05,815 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:55:05,815 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:52.900596+00:00 (in 47.001254 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:05 UTC)" (scheduled at 2025-02-14 01:55:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:55:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:55:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:55:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:15.803718+00:00 (in 7.996192 seconds) notificationworker stdout | 2025-02-14 01:55:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:07 UTC)" (scheduled at 2025-02-14 01:55:07.807092+00:00) notificationworker stdout | 2025-02-14 01:55:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
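All of these workers share one scheduling shape: an APScheduler background scheduler running interval-triggered jobs, for example the notificationworker's poll_queue every 10 seconds plus a run_watchdog every 60 seconds, which is exactly what the "Looking for jobs to run / Next wakeup is due" lines record. A small sketch of wiring that up; the job bodies below are stand-ins, not the real worker logic:

# Sketch: the APScheduler wiring behind the "Running job ... (trigger: interval[...])"
# lines above. Job bodies are placeholders.
import time
from apscheduler.schedulers.background import BackgroundScheduler

def poll_queue():
    print("Getting work item from queue.")      # stand-in for the real poll

def run_watchdog():
    print("Running watchdog.")                  # stand-in for the real watchdog

scheduler = BackgroundScheduler()
scheduler.add_job(poll_queue, "interval", seconds=10)    # interval[0:00:10] in the log
scheduler.add_job(run_watchdog, "interval", seconds=60)  # interval[0:01:00] in the log
scheduler.start()

try:
    while True:                                  # keep the worker process alive
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()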
notificationworker stdout | 2025-02-14 01:55:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:55:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:55:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:45.503718+00:00 (in 32.997587 seconds) namespacegcworker stdout | 2025-02-14 01:55:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:12 UTC)" (scheduled at 2025-02-14 01:55:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:55:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:55:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:55:14,007 [245] [DEBUG] [app] Starting request: urn:request:4e38c7e0-4fe9-4823-b0a6-c7a9e2bf147b (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:14,008 [242] [DEBUG] [app] Starting request: urn:request:b7ec228c-ddba-4870-aa23-6c95fa78a73e (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:14,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:14,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:14,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:55:14,012 [253] [DEBUG] [app] Starting request: urn:request:befc7f09-937f-49ab-98fc-9d59313490f0 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:55:14,013 [253] [DEBUG] [app] Ending request: urn:request:befc7f09-937f-49ab-98fc-9d59313490f0 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:befc7f09-937f-49ab-98fc-9d59313490f0', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:55:14,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:55:14,013 [250] [DEBUG] [app] Starting request: urn:request:14ee01b9-3677-4628-80c9-c92a5c4cc7a0 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:55:14,014 [250] [DEBUG] [app] Ending request: urn:request:14ee01b9-3677-4628-80c9-c92a5c4cc7a0 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:14ee01b9-3677-4628-80c9-c92a5c4cc7a0', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:55:14,014 [250] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-web stdout | 2025-02-14 01:55:14,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:14,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:14,016 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:14,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:14,017 [244] [DEBUG] [app] Starting request: urn:request:9e90f8aa-7d48-49a1-b3a0-9dc8939a7176 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:14,018 [244] [DEBUG] [app] Ending request: urn:request:9e90f8aa-7d48-49a1-b3a0-9dc8939a7176 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:9e90f8aa-7d48-49a1-b3a0-9dc8939a7176', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:55:14,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:55:14,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:14,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:14,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:14,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:55:14,019 [244] [DEBUG] [app] Starting request: urn:request:516519e1-0364-4fe3-969f-1da297e3242b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:14,019 [244] [DEBUG] [app] Ending request: urn:request:516519e1-0364-4fe3-969f-1da297e3242b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:516519e1-0364-4fe3-969f-1da297e3242b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:14,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:55:14,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:14,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:14,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:55:14,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:55:14,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:55:14,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:55:14,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:55:14,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:55:14,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:55:14,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:55:14,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:55:14,036 [242] [DEBUG] [app] Ending request: urn:request:b7ec228c-ddba-4870-aa23-6c95fa78a73e (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b7ec228c-ddba-4870-aa23-6c95fa78a73e', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:55:14,036 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:55:14,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.029 47 0.030) gunicorn-web stdout | 2025-02-14 01:55:14,037 [245] [DEBUG] [app] Ending request: urn:request:4e38c7e0-4fe9-4823-b0a6-c7a9e2bf147b (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:4e38c7e0-4fe9-4823-b0a6-c7a9e2bf147b', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:55:14,037 [245] [DEBUG] [data.database] Disconnecting from database. 
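Editor's note: the gunicorn-web entries above show how a kube-probe hit on /health/instance fans out into self-pings of https://localhost:8443/v1/_internal_ping and /_internal_ping without certificate verification, which is why urllib3 prints InsecureRequestWarning on every probe. A minimal sketch of that pattern, assuming the `requests` library; the paths and port come from the log, everything else (helper name, timeout) is illustrative and not Quay's actual code:

```python
# Sketch of the self-ping seen in the log: call the local registry and web
# endpoints over HTTPS with verify=False, which is exactly what triggers
# "InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'".
import requests

LOCAL_BASE = "https://localhost:8443"  # local TLS port taken from the log


def ping_local_services(timeout: float = 2.0) -> dict:
    results = {}
    for path in ("/v1/_internal_ping", "/_internal_ping"):
        # verify=False skips validation of the local (self-signed) certificate.
        resp = requests.get(LOCAL_BASE + path, verify=False, timeout=timeout)
        results[path] = resp.status_code == 200
    return results


if __name__ == "__main__":
    print(ping_local_services())
```

As the warning itself suggests, pointing `verify=` at the service's CA bundle instead of disabling verification would silence these messages; for loopback health pings they are generally harmless noise.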
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) gunicorn-web stdout | 2025-02-14 01:55:14,037 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:55:14,873 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:55:14,958 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:55:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:55:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:25.803718+00:00 (in 9.999576 seconds) notificationworker stdout | 2025-02-14 01:55:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:25 UTC)" (scheduled at 2025-02-14 01:55:15.803718+00:00) notificationworker stdout | 2025-02-14 01:55:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:55:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 15, 804474), True, datetime.datetime(2025, 2, 14, 1, 55, 15, 804474), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:55:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:55:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
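Editor's note: the exportactionlogsworker and quotaregistrysizeworker lines above (and many similar ones below) report pushing a per-process metrics registry to a local Prometheus pushgateway at http://localhost:9091, grouped by host, process_name, and pid. A rough equivalent using `prometheus_client`; the grouping-key fields mirror the log, while the job name and the gauge are made up for illustration:

```python
# Sketch of pushing a per-worker registry to the local pushgateway with a
# host/process_name/pid grouping key, as the "pushed registry to pushgateway"
# log lines describe. The metric itself is illustrative, not a real Quay metric.
import os
import socket

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
heartbeat = Gauge("worker_heartbeat_timestamp", "Last worker heartbeat", registry=registry)
heartbeat.set_to_current_time()

push_to_gateway(
    "localhost:9091",
    job="quay_worker",                       # assumption: job label not shown in the log
    registry=registry,
    grouping_key={
        "host": socket.gethostname(),
        "process_name": "exampleworker.py",  # hypothetical worker script name
        "pid": str(os.getpid()),
    },
)
```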
notificationworker stdout | 2025-02-14 01:55:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:55:17,033 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:55:20,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:55:20,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:48.125163+00:00 (in 27.999816 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:55:20,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 02:00:20 UTC)" (scheduled at 2025-02-14 01:55:20.124914+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:55:20,126 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [False, datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), 'secscanv4/%']) securityscanningnotificationworker stdout | 2025-02-14 01:55:20,135 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), True, datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), 0, 'secscanv4/%']) securityscanningnotificationworker stdout | 2025-02-14 01:55:20,138 [87] [DEBUG] [peewee] ('SELECT COUNT(1) FROM (SELECT DISTINCT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) AND NOT ("t1"."queue_name" IN (SELECT "t1"."queue_name" FROM "queueitem" AS "t1" WHERE ((("t1"."available" = %s) AND ("t1"."processing_expires" > %s)) AND ("t1"."queue_name" ILIKE %s)))))) AS "_wrapped"', [datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), True, datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), 0, 'secscanv4/%', False, datetime.datetime(2025, 2, 14, 1, 55, 20, 125628), 'secscanv4/%']) securityscanningnotificationworker stdout | 2025-02-14 01:55:20,141 [87] [DEBUG] [data.database] Disconnecting from database. 
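Editor's note: the recurring "Looking for jobs to run", "Next wakeup is due at ...", and "Running job ... (trigger: interval[...])" lines are APScheduler's scheduler and executor logging around each worker's periodic methods. A small sketch of that setup, assuming `apscheduler`; the job functions are placeholders standing in for methods like QueueWorker.poll_queue and QueueWorker.run_watchdog:

```python
# Sketch of the APScheduler pattern implied by the log: periodic worker methods
# registered on interval triggers, with apscheduler's own DEBUG/INFO logging
# producing the "Running job ... executed successfully" lines.
import logging
import time

from apscheduler.schedulers.background import BackgroundScheduler

logging.basicConfig(level=logging.DEBUG)  # surfaces the scheduler/executor log lines


def poll_queue():
    print("polling queue")        # placeholder body


def run_watchdog():
    print("watchdog tick")        # placeholder body


scheduler = BackgroundScheduler()
scheduler.add_job(poll_queue, "interval", seconds=10)   # matches interval[0:00:10]
scheduler.add_job(run_watchdog, "interval", minutes=1)  # matches interval[0:01:00]
scheduler.start()

try:
    time.sleep(30)
finally:
    scheduler.shutdown()
```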
securityscanningnotificationworker stdout | 2025-02-14 01:55:20,141 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.update_queue_metrics (trigger: interval[0:05:00], next run at: 2025-02-14 02:00:20 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:55:20,407 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:55:20,795 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:55:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:55:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:52.310342+00:00 (in 29.999587 seconds) autopruneworker stdout | 2025-02-14 01:55:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:52 UTC)" (scheduled at 2025-02-14 01:55:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:55:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494522316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:55:22,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:55:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
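Editor's note: the autopruneworker query above claims at most one `autoprunetaskstatus` row with `ORDER BY last_ran_ms ASC NULLS FIRST ... LIMIT 1 FOR UPDATE SKIP LOCKED`, so concurrent workers never contend for the same task; here it finds nothing and logs "no autoprune tasks found, exiting...". A generic, simplified sketch of that claim pattern using `psycopg2` (table and column names follow the log; the connection string is an assumption, and the real query additionally filters out disabled namespaces):

```python
# Sketch of the "claim one task with SKIP LOCKED" pattern from the autopruneworker
# query: oldest (or never-run) task first, skipping rows locked by other workers.
import psycopg2

conn = psycopg2.connect("dbname=quay user=quay host=localhost")  # illustrative DSN


def claim_next_task(cutoff_ms: int):
    # The row stays locked only while this transaction is open, so real code
    # would do its work inside the same transaction before committing.
    with conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, namespace_id, last_ran_ms
                FROM autoprunetaskstatus
                WHERE last_ran_ms < %s OR last_ran_ms IS NULL
                ORDER BY last_ran_ms ASC NULLS FIRST
                LIMIT 1
                FOR UPDATE SKIP LOCKED
                """,
                (cutoff_ms,),
            )
            return cur.fetchone()  # None corresponds to "no autoprune tasks found"
```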
autopruneworker stdout | 2025-02-14 01:55:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:55:22,513 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:55:23,453 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:55:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:55:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:29.232325+00:00 (in 5.000691 seconds) securityworker stdout | 2025-02-14 01:55:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:54 UTC)" (scheduled at 2025-02-14 01:55:24.231161+00:00) securityworker stdout | 2025-02-14 01:55:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:55:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:55:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:55:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:55:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:55:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
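Editor's note: before indexing, the securityworker above generates a short-lived JWT and GETs Clair's /indexer/api/v1/index_state, then queries the min and max manifest ids to bound its scan. A hedged sketch of the state check, assuming `requests` and PyJWT; the signing key, issuer, and claim layout are placeholders and not Quay's real token scheme, and the response shape is an assumption:

```python
# Sketch of checking Clair's indexer state with a bearer JWT, mirroring the
# "generated jwt ... GETing security URL .../indexer/api/v1/index_state" lines.
import time

import jwt       # PyJWT
import requests

CLAIR_BASE = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
SHARED_KEY = b"example-pre-shared-key"  # placeholder, not the real PSK


def get_index_state() -> str:
    token = jwt.encode(
        {"iss": "quay", "iat": int(time.time()), "exp": int(time.time()) + 300},
        SHARED_KEY,
        algorithm="HS256",
    )
    resp = requests.get(
        f"{CLAIR_BASE}/indexer/api/v1/index_state",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["state"]  # assumption about the response body
```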
securityworker stdout | 2025-02-14 01:55:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:55:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:55:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:25.392556+00:00 (in 1.001711 seconds) gcworker stdout | 2025-02-14 01:55:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:54 UTC)" (scheduled at 2025-02-14 01:55:24.390410+00:00) gcworker stdout | 2025-02-14 01:55:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:55:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:54 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:55:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:55:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:30.212654+00:00 (in 4.996931 seconds) exportactionlogsworker stdout | 2025-02-14 01:55:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:25 UTC)" (scheduled at 2025-02-14 01:55:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:55:25,216 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:55:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:25 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:55:25,278 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:55:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:55:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:54.390410+00:00 (in 28.997445 seconds) gcworker stdout | 2025-02-14 01:55:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:55 UTC)" (scheduled at 2025-02-14 01:55:25.392556+00:00) gcworker stdout | 2025-02-14 01:55:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:55:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497825401, None, 1, 0]) gcworker stdout | 2025-02-14 01:55:25,405 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:55:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:55:25,688 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:55:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:55:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:35.803718+00:00 (in 9.999607 seconds) notificationworker stdout | 2025-02-14 01:55:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:35 UTC)" (scheduled at 2025-02-14 01:55:25.803718+00:00) notificationworker stdout | 2025-02-14 01:55:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:55:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 25, 804430), True, datetime.datetime(2025, 2, 14, 1, 55, 25, 804430), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:55:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:55:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
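Editor's note: the notificationworker poll above selects one random `queueitem` whose name matches the `notification/%` prefix and which is "available", meaning either explicitly available or holding an expired processing lease, with retries remaining; an empty result produces "No more work." A simplified sketch of that selection, assuming a peewee `Database` handle `db` (the real query randomizes in SQL with ORDER BY Random(), which is approximated here in Python):

```python
# Sketch of the QueueWorker poll implied by the log: fetch a small batch of
# matching, available queue items and pick one at random; log "No more work."
# when nothing qualifies.
import datetime
import random


def poll_queue(db, prefix="notification/%", batch=50):
    now = datetime.datetime.utcnow()
    rows = db.execute_sql(
        """
        SELECT id, queue_name, body
        FROM queueitem
        WHERE available_after <= %s
          AND (available OR processing_expires <= %s)
          AND retries_remaining > 0
          AND queue_name ILIKE %s
        LIMIT %s
        """,
        (now, now, prefix, batch),
    ).fetchall()
    if not rows:
        print("No more work.")
        return None
    return random.choice(rows)
```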
notificationworker stdout | 2025-02-14 01:55:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:55:26,458 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:55:26,895 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:55:27,237 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:55:27,558 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:55:27,948 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:55:28,092 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:55:28,330 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:55:28,557 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:55:28,731 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:55:29,006 [242] [DEBUG] [app] Starting request: urn:request:b30ba8b8-4771-46e5-ad58-1555fc3bb283 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:29,007 [244] [DEBUG] [app] Starting request: urn:request:bec41212-87e3-4458-80f0-6fd1ce123ee1 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:29,008 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:29,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:29,010 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:29,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:55:29,012 [253] [DEBUG] [app] Starting request: urn:request:34f18a6f-aa9e-4dd3-8c17-4b3c39c03e50 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:55:29,012 [253] [DEBUG] [app] Ending request: urn:request:34f18a6f-aa9e-4dd3-8c17-4b3c39c03e50 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:34f18a6f-aa9e-4dd3-8c17-4b3c39c03e50', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:55:29,012 [246] [DEBUG] [app] Starting request: urn:request:3c6ff47f-daf9-41db-abe0-1fde17a92dc6 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:55:29,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:55:29,013 [246] [DEBUG] [app] Ending request: urn:request:3c6ff47f-daf9-41db-abe0-1fde17a92dc6 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:3c6ff47f-daf9-41db-abe0-1fde17a92dc6', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:29,013 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:55:29,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:55:29,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-web stdout | 2025-02-14 01:55:29,014 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:29,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:29,016 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:29,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:29,017 [245] [DEBUG] [app] Starting request: urn:request:5e410074-7d82-48fe-8a2b-3440e1a7672c (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:29,017 [244] [DEBUG] [app] Starting request: urn:request:ea845746-c0e7-4c9f-b701-3ec4afe7b1f7 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:29,017 [245] [DEBUG] [app] Ending request: urn:request:5e410074-7d82-48fe-8a2b-3440e1a7672c (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:5e410074-7d82-48fe-8a2b-3440e1a7672c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:29,017 [244] [DEBUG] [app] Ending request: urn:request:ea845746-c0e7-4c9f-b701-3ec4afe7b1f7 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:ea845746-c0e7-4c9f-b701-3ec4afe7b1f7', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:29,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:55:29,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:29,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:55:29,018 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:29,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:29,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:55:29,018 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:29,018 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:55:29,024 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:55:29,024 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:55:29,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:55:29,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:55:29,030 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:55:29,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:55:29,033 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:55:29,033 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:55:29,035 [244] [DEBUG] [app] Ending request: urn:request:bec41212-87e3-4458-80f0-6fd1ce123ee1 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:bec41212-87e3-4458-80f0-6fd1ce123ee1', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:55:29,035 [244] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:55:29,036 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:55:29,036 [242] [DEBUG] [app] Ending request: urn:request:b30ba8b8-4771-46e5-ad58-1555fc3bb283 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b30ba8b8-4771-46e5-ad58-1555fc3bb283', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:55:29,036 [242] [DEBUG] [data.database] Disconnecting from database. 
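Editor's note: the /health/instance database check above sets a 5-second `statement_timeout`, probes the tiny `teamrole` table, resets the timeout to 0, and disconnects, so a hung database fails the probe quickly rather than stalling the gunicorn worker. A sketch of that check using peewee, matching the SQL in the log; the connection settings and model definition are illustrative:

```python
# Sketch of the health check's DB validation: bound the probe with a statement
# timeout, confirm the small teamrole table answers, then clear the timeout.
from peewee import CharField, Model, PostgresqlDatabase

db = PostgresqlDatabase("quay", host="localhost", user="quay")  # placeholder settings


class TeamRole(Model):
    name = CharField()

    class Meta:
        database = db
        table_name = "teamrole"


def database_is_healthy(timeout_ms: int = 5000) -> bool:
    try:
        db.execute_sql("SET statement_timeout=%s;", (timeout_ms,))
        list(TeamRole.select(TeamRole.id, TeamRole.name).limit(1))  # the probe query
        db.execute_sql("SET statement_timeout=%s;", (0,))
        return True
    except Exception:
        return False
    finally:
        if not db.is_closed():
            db.close()  # corresponds to "Disconnecting from database."
```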
gunicorn-web stdout | 2025-02-14 01:55:29,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) repositorygcworker stdout | 2025-02-14 01:55:29,175 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stdout | 2025-02-14 01:55:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:55:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:54.231161+00:00 (in 24.998368 seconds) securityworker stdout | 2025-02-14 01:55:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:59 UTC)" (scheduled at 2025-02-14 01:55:29.232325+00:00) securityworker stdout | 2025-02-14 01:55:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:55:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:55:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:55:29,237 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:29,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:29,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:55:29,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:29,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:55:29,246 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:29,249 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:55:29,250 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 50, 29, 236862), 1, 2]) securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] 
[util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:29,253 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:55:29,254 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 50, 29, 236862), 1, 2]) securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 
01:55:29,257 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:29,257 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:29,257 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:55:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:55:29,903 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:55:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:55:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:25.215238+00:00 (in 55.002159 seconds) exportactionlogsworker stdout | 2025-02-14 01:55:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:30 UTC)" (scheduled at 2025-02-14 01:55:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:55:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. exportactionlogsworker stdout | 2025-02-14 01:55:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 30, 213405), True, datetime.datetime(2025, 2, 14, 1, 55, 30, 213405), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:55:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:55:30,223 [63] [DEBUG] [data.database] Disconnecting from database. 
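Editor's note: the securityworker's `util.migrate.allocator` entries above come from a block allocator walking the manifest id space: it selects a random "hole" in the remaining range, queries one block of candidates, and when the block yields nothing ("No candidates") marks the whole block completed and shrinks the range until none remain ("No more work by worker"). A much-simplified sketch of that strategy; this is a guess at the general approach, not the actual allocator implementation:

```python
# Simplified sketch of the random-hole block scan suggested by the log: pick a
# random unfinished range, fetch one block of candidates for it, and mark the
# block done when the query returns nothing.
import random


def scan_id_space(min_id, max_id, fetch_block, block_size=1000):
    """fetch_block(start, end) returns rows with start <= id < end."""
    holes = [(min_id, max_id)]                       # unprocessed ranges ("holes")
    while holes:
        idx = random.randrange(len(holes))           # "Selected random hole"
        start, end = holes.pop(idx)
        block_end = min(start + block_size, end)
        rows = fetch_block(start, block_end)
        # An empty result corresponds to "No candidates, marking entire block completed".
        for row in rows:
            yield row
        if block_end < end:
            holes.append((block_end, end))           # keep the remainder as a new hole
    # Falling out of the loop corresponds to "No more work by worker".
```

For the 1-2 range in the log, `list(scan_id_space(1, 2, lambda s, e: []))` returns immediately with no rows, which matches the observed "Total range: 1-2 ... No more work" sequence.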
exportactionlogsworker stdout | 2025-02-14 01:55:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:55:31,347 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:55:31,350 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:55:31,353 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:55:31,356 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:55:31,358 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:55:31,632 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:55:32,503 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:55:32,814 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:55:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:55:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:36.014770+00:00 (in 3.002669 seconds) repositorygcworker stdout | 2025-02-14 01:55:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:33 UTC)" (scheduled at 2025-02-14 01:55:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:55:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. 
repositorygcworker stdout | 2025-02-14 01:55:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 33, 12376), True, datetime.datetime(2025, 2, 14, 1, 55, 33, 12376), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:55:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:55:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:55:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:55:33,260 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:55:33,263 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:55:33,265 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:55:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:55:34,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:04.000511+00:00 (in 29.999506 seconds) buildlogsarchiver stdout | 2025-02-14 01:55:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:04 UTC)" (scheduled at 2025-02-14 01:55:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:55:34,002 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 55, 34, 1289), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:55:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:55:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
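Editor's note: the buildlogsarchiver above looks for one random repository build whose phase is terminal (complete, error, cancelled) or that started before a cutoff roughly two weeks back (inferred from the timestamps in the query, not from configuration) and whose logs are not yet archived; finding none, it logs "No more builds to archive". A plain sketch of that candidate selection, again assuming a peewee-style `db.execute_sql` handle:

```python
# Sketch of the archiver's candidate query from the log: pick one random
# unarchived build that has finished or is older than the cutoff.
import datetime
import random


def pick_build_to_archive(db, cutoff_days=15, batch=50):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=cutoff_days)
    rows = db.execute_sql(
        """
        SELECT id FROM repositorybuild
        WHERE (phase IN ('complete', 'error', 'cancelled') OR started < %s)
          AND logs_archived = FALSE
        LIMIT %s
        """,
        (cutoff, batch),
    ).fetchall()
    return random.choice(rows)[0] if rows else None  # None -> "No more builds to archive"
```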
buildlogsarchiver stdout | 2025-02-14 01:55:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:55:34,563 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:55:34,566 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:55:34,571 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:55:34,575 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:55:34,578 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:55:34,581 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:55:34,584 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:55:34,616 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:55:34,624 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:55:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:55:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:45.803718+00:00 (in 9.999563 seconds) notificationworker stdout | 2025-02-14 01:55:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:45 UTC)" (scheduled at 2025-02-14 01:55:35.803718+00:00) notificationworker stdout | 2025-02-14 01:55:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:55:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 35, 804433), True, datetime.datetime(2025, 2, 14, 1, 55, 35, 804433), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:55:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:55:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:55:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:55:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:55:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:33.011632+00:00 (in 56.996427 seconds) repositorygcworker stdout | 2025-02-14 01:55:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:36 UTC)" (scheduled at 2025-02-14 01:55:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:55:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:55:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:55:44,007 [243] [DEBUG] [app] Starting request: urn:request:0d8e7dda-5aa4-417f-bdd2-b62faad24e91 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:44,007 [242] [DEBUG] [app] Starting request: urn:request:21d7be18-495b-4c5c-9145-ddc6af76e908 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:55:44,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:44,009 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:44,012 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:55:44,012 [246] [DEBUG] [app] Starting request: urn:request:6bf18be6-35b2-4122-92eb-c87f04d86813 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:55:44,013 [246] [DEBUG] [app] Ending request: urn:request:6bf18be6-35b2-4122-92eb-c87f04d86813 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:6bf18be6-35b2-4122-92eb-c87f04d86813', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:55:44,013 [252] [DEBUG] [app] Starting request: urn:request:2cbb946d-d541-4d05-ab6f-0f5527de77f8 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:55:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:55:44,013 [252] [DEBUG] [app] Ending request: urn:request:2cbb946d-d541-4d05-ab6f-0f5527de77f8 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:2cbb946d-d541-4d05-ab6f-0f5527de77f8', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:44,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:55:44,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:55:44,014 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:44,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:44,015 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:55:44,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:44,017 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:55:44,018 [242] [DEBUG] [app] Starting request: urn:request:10314ef5-2114-40e3-b2f2-a9dae2e18ace (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:44,018 [242] [DEBUG] [app] Ending request: urn:request:10314ef5-2114-40e3-b2f2-a9dae2e18ace (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:10314ef5-2114-40e3-b2f2-a9dae2e18ace', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:44,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:55:44,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:44,019 [245] [DEBUG] [app] Starting request: urn:request:aa5eeef3-124f-4c22-ad1d-43210ab5420b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:55:44,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:44,019 [245] [DEBUG] [app] Ending request: urn:request:aa5eeef3-124f-4c22-ad1d-43210ab5420b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:aa5eeef3-124f-4c22-ad1d-43210ab5420b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:55:44,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:55:44,019 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:55:44,019 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:55:44,020 [243] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:55:44,020 [243] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:55:44,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:55:44,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:55:44,025 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:55:44,025 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:55:44,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:55:44,032 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:55:44,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:55:44,034 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:55:44,036 [242] [DEBUG] [app] Ending request: urn:request:21d7be18-495b-4c5c-9145-ddc6af76e908 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:21d7be18-495b-4c5c-9145-ddc6af76e908', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:55:44,036 [242] [DEBUG] [data.database] Disconnecting from database.
gunicorn-web stdout | 2025-02-14 01:55:44,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
gunicorn-web stdout | 2025-02-14 01:55:44,037 [243] [DEBUG] [app] Ending request: urn:request:0d8e7dda-5aa4-417f-bdd2-b62faad24e91 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:0d8e7dda-5aa4-417f-bdd2-b62faad24e91', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030)
gunicorn-web stdout | 2025-02-14 01:55:44,037 [243] [DEBUG] [data.database] Disconnecting from database.
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031)
gunicorn-web stdout | 2025-02-14 01:55:44,037 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
exportactionlogsworker stdout | 2025-02-14 01:55:44,889 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'}
quotaregistrysizeworker stdout | 2025-02-14 01:55:44,994 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'}
namespacegcworker stdout | 2025-02-14 01:55:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
namespacegcworker stdout | 2025-02-14 01:55:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:12.505687+00:00 (in 27.001533 seconds)
namespacegcworker stdout | 2025-02-14 01:55:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:45 UTC)" (scheduled at 2025-02-14 01:55:45.503718+00:00)
namespacegcworker stdout | 2025-02-14 01:55:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue.
namespacegcworker stdout | 2025-02-14 01:55:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 45, 504432), True, datetime.datetime(2025, 2, 14, 1, 55, 45, 504432), 0, 'namespacegc/%', 50, 1, 0])
namespacegcworker stdout | 2025-02-14 01:55:45,514 [73] [DEBUG] [workers.queueworker] No more work.
namespacegcworker stdout | 2025-02-14 01:55:45,514 [73] [DEBUG] [data.database] Disconnecting from database.
namespacegcworker stdout | 2025-02-14 01:55:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:45 UTC)" executed successfully
notificationworker stdout | 2025-02-14 01:55:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:55:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:55.803718+00:00 (in 9.999558 seconds)
notificationworker stdout | 2025-02-14 01:55:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:55 UTC)" (scheduled at 2025-02-14 01:55:45.803718+00:00)
notificationworker stdout | 2025-02-14 01:55:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:55:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 45, 804438), True, datetime.datetime(2025, 2, 14, 1, 55, 45, 804438), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:55:45,814 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:55:45,814 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:55:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:55:55 UTC)" executed successfully
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:46.009738+00:00 (in 59.999539 seconds)
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:46 UTC)" (scheduled at 2025-02-14 01:55:46.009738+00:00)
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0])
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,018 [78] [DEBUG] [data.database] Disconnecting from database.
quotaregistrysizeworker stdout | 2025-02-14 01:55:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:46 UTC)" executed successfully
queuecleanupworker stdout | 2025-02-14 01:55:47,069 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'}
securityscanningnotificationworker stdout | 2025-02-14 01:55:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:55:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:59.123196+00:00 (in 10.997544 seconds)
securityscanningnotificationworker stdout | 2025-02-14 01:55:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:48 UTC)" (scheduled at 2025-02-14 01:55:48.125163+00:00)
securityscanningnotificationworker stdout | 2025-02-14 01:55:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog.
securityscanningnotificationworker stdout | 2025-02-14 01:55:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:48 UTC)" executed successfully
namespacegcworker stdout | 2025-02-14 01:55:50,431 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'}
teamsyncworker stdout | 2025-02-14 01:55:50,831 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'}
autopruneworker stdout | 2025-02-14 01:55:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
autopruneworker stdout | 2025-02-14 01:55:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:22.310342+00:00 (in 29.999597 seconds)
autopruneworker stdout | 2025-02-14 01:55:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:22 UTC)" (scheduled at 2025-02-14 01:55:52.310342+00:00)
autopruneworker stdout | 2025-02-14 01:55:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494552316, None, 1, 0])
autopruneworker stdout | 2025-02-14 01:55:52,320 [56] [INFO] [__main__] no autoprune tasks found, exiting...
autopruneworker stdout | 2025-02-14 01:55:52,320 [56] [DEBUG] [data.database] Disconnecting from database.
autopruneworker stdout | 2025-02-14 01:55:52,320 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:22 UTC)" executed successfully
expiredappspecifictokenworker stdout | 2025-02-14 01:55:52,549 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'}
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:05.898886+00:00 (in 12.997755 seconds)
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:52 UTC)" (scheduled at 2025-02-14 01:55:52.900596+00:00)
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0])
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,910 [71] [DEBUG] [data.database] Disconnecting from database.
manifestsubjectbackfillworker stdout | 2025-02-14 01:55:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:52 UTC)" executed successfully
buildlogsarchiver stdout | 2025-02-14 01:55:53,468 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'}
securityworker stdout | 2025-02-14 01:55:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityworker stdout | 2025-02-14 01:55:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:59.232325+00:00 (in 5.000629 seconds)
securityworker stdout | 2025-02-14 01:55:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:24 UTC)" (scheduled at 2025-02-14 01:55:54.231161+00:00)
securityworker stdout | 2025-02-14 01:55:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request
securityworker stdout | 2025-02-14 01:55:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state
securityworker stdout | 2025-02-14 01:55:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None
securityworker stdout | 2025-02-14 01:55:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', [])
securityworker stdout | 2025-02-14 01:55:54,243 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', [])
securityworker stdout | 2025-02-14 01:55:54,246 [88] [DEBUG] [data.database] Disconnecting from database.
securityworker stdout | 2025-02-14 01:55:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:24 UTC)" executed successfully
gcworker stdout | 2025-02-14 01:55:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:55:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:55:55.392556+00:00 (in 1.001736 seconds)
gcworker stdout | 2025-02-14 01:55:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:24 UTC)" (scheduled at 2025-02-14 01:55:54.390410+00:00)
gcworker stdout | 2025-02-14 01:55:54,391 [64] [DEBUG] [__main__] No GC policies found
gcworker stdout | 2025-02-14 01:55:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:24 UTC)" executed successfully
storagereplication stdout | 2025-02-14 01:55:55,308 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'}
gcworker stdout | 2025-02-14 01:55:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:55:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:24.390410+00:00 (in 28.997453 seconds)
gcworker stdout | 2025-02-14 01:55:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:25 UTC)" (scheduled at 2025-02-14 01:55:55.392556+00:00)
gcworker stdout | 2025-02-14 01:55:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0])
gcworker stdout | 2025-02-14 01:55:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497855401, None, 1, 0])
gcworker stdout | 2025-02-14 01:55:55,404 [64] [DEBUG] [data.database] Disconnecting from database.
gcworker stdout | 2025-02-14 01:55:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:25 UTC)" executed successfully
notificationworker stdout | 2025-02-14 01:55:55,703 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'}
notificationworker stdout | 2025-02-14 01:55:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
notificationworker stdout | 2025-02-14 01:55:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:05.803718+00:00 (in 9.999549 seconds)
notificationworker stdout | 2025-02-14 01:55:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:05 UTC)" (scheduled at 2025-02-14 01:55:55.803718+00:00)
notificationworker stdout | 2025-02-14 01:55:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue.
notificationworker stdout | 2025-02-14 01:55:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 55, 804463), True, datetime.datetime(2025, 2, 14, 1, 55, 55, 804463), 0, 'notification/%', 50, 1, 0])
notificationworker stdout | 2025-02-14 01:55:55,814 [75] [DEBUG] [workers.queueworker] No more work.
notificationworker stdout | 2025-02-14 01:55:55,814 [75] [DEBUG] [data.database] Disconnecting from database.
notificationworker stdout | 2025-02-14 01:55:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:05 UTC)" executed successfully
manifestbackfillworker stdout | 2025-02-14 01:55:56,494 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'}
globalpromstats stdout | 2025-02-14 01:55:56,908 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'}
builder stdout | 2025-02-14 01:55:57,274 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'}
nginx stdout | 10.129.2.30 - - [14/Feb/2025:01:55:57 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2"
servicekey stdout | 2025-02-14 01:55:57,595 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'}
logrotateworker stdout | 2025-02-14 01:55:57,965 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'}
nginx stdout | 10.128.4.34 - - [14/Feb/2025:01:55:58 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2"
securityworker stdout | 2025-02-14 01:55:58,112 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'}
blobuploadcleanupworker stdout | 2025-02-14 01:55:58,366 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'}
autopruneworker stdout | 2025-02-14 01:55:58,593 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'}
repositoryactioncounter stdout | 2025-02-14 01:55:58,745 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'}
gunicorn-web stdout | 2025-02-14 01:55:59,006 [242] [DEBUG] [app] Starting request: urn:request:a9e5f232-74cf-4bc0-9f70-9dd376f7ae3d (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:55:59,007 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:55:59,008 [243] [DEBUG] [app] Starting request: urn:request:a592c7fe-7cd1-42f7-a8cd-b3ffe700ad50 (/health/instance) {'X-Forwarded-For': '10.129.2.2'}
gunicorn-web stdout | 2025-02-14 01:55:59,009 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:55:59,010 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-registry stdout | 2025-02-14 01:55:59,011 [246] [DEBUG] [app] Starting request: urn:request:aa0bd269-bff7-4b7b-9968-ad546f04b097 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:55:59,011 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-registry stdout | 2025-02-14 01:55:59,012 [246] [DEBUG] [app] Ending request: urn:request:aa0bd269-bff7-4b7b-9968-ad546f04b097 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:aa0bd269-bff7-4b7b-9968-ad546f04b097', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
gunicorn-registry stdout | 2025-02-14 01:55:59,012 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002)
gunicorn-web stdout | 2025-02-14 01:55:59,012 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-registry stdout | 2025-02-14 01:55:59,012 [252] [DEBUG] [app] Starting request: urn:request:2c891c55-6d78-4c65-86bd-53e0d7a1defe (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-registry stdout | 2025-02-14 01:55:59,013 [252] [DEBUG] [app] Ending request: urn:request:2c891c55-6d78-4c65-86bd-53e0d7a1defe (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:2c891c55-6d78-4c65-86bd-53e0d7a1defe', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002)
gunicorn-registry stdout | 2025-02-14 01:55:59,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:55:59,013 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:55:59,013 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:55:59,015 [243] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost
gunicorn-web stdout | 2025-02-14 01:55:59,015 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:55:59,016 [243] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
gunicorn-web stdout | warnings.warn(
gunicorn-web stdout | 2025-02-14 01:55:59,017 [242] [DEBUG] [app] Starting request: urn:request:f874385a-b08d-4a94-86de-2e6be7ebab32 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
gunicorn-web stdout | 2025-02-14 01:55:59,017 [242] [DEBUG] [app] Ending request: urn:request:f874385a-b08d-4a94-86de-2e6be7ebab32 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:f874385a-b08d-4a94-86de-2e6be7ebab32', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:55:59,017 [244] [DEBUG] [app] Starting request: urn:request:e9ee1ffb-187a-40d4-ad7d-daa65e80a019 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'}
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001)
gunicorn-web stdout | 2025-02-14 01:55:59,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
gunicorn-web stdout | 2025-02-14 01:55:59,018 [244] [DEBUG] [app] Ending request: urn:request:e9ee1ffb-187a-40d4-ad7d-daa65e80a019 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:e9ee1ffb-187a-40d4-ad7d-daa65e80a019', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'}
gunicorn-web stdout | 2025-02-14 01:55:59,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:55:59,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:55:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2"
nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002)
gunicorn-web stdout | 2025-02-14 01:55:59,018 [243] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4
gunicorn-web stdout | 2025-02-14 01:55:59,019 [242] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:55:59,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:55:59,019 [243] [DEBUG] [data.model.health] Validating database connection.
gunicorn-web stdout | 2025-02-14 01:55:59,019 [243] [INFO] [data.database] Connection pooling disabled for postgresql
gunicorn-web stdout | 2025-02-14 01:55:59,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:55:59,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:55:59,024 [243] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms.
gunicorn-web stdout | 2025-02-14 01:55:59,024 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,))
gunicorn-web stdout | 2025-02-14 01:55:59,031 [243] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:55:59,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
gunicorn-web stdout | 2025-02-14 01:55:59,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:55:59,034 [243] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,))
gunicorn-web stdout | 2025-02-14 01:55:59,036 [242] [DEBUG] [app] Ending request: urn:request:a9e5f232-74cf-4bc0-9f70-9dd376f7ae3d (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:a9e5f232-74cf-4bc0-9f70-9dd376f7ae3d', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:55:59,036 [242] [DEBUG] [data.database] Disconnecting from database.
gunicorn-web stdout | 2025-02-14 01:55:59,036 [243] [DEBUG] [app] Ending request: urn:request:a592c7fe-7cd1-42f7-a8cd-b3ffe700ad50 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:a592c7fe-7cd1-42f7-a8cd-b3ffe700ad50', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'}
gunicorn-web stdout | 2025-02-14 01:55:59,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
gunicorn-web stdout | 2025-02-14 01:55:59,036 [243] [DEBUG] [data.database] Disconnecting from database.
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031)
gunicorn-web stdout | 2025-02-14 01:55:59,037 [243] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:55:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30"
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:55:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030)
securityscanningnotificationworker stdout | 2025-02-14 01:55:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
securityscanningnotificationworker stdout | 2025-02-14 01:55:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:48.125163+00:00 (in 49.001500 seconds)
securityscanningnotificationworker stdout | 2025-02-14 01:55:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:59 UTC)" (scheduled at 2025-02-14 01:55:59.123196+00:00)
securityscanningnotificationworker stdout | 2025-02-14 01:55:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue.
securityscanningnotificationworker stdout | 2025-02-14 01:55:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 55, 59, 123918), True, datetime.datetime(2025, 2, 14, 1, 55, 59, 123918), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:55:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:55:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:55:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:56:59 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:55:59,211 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stdout | 2025-02-14 01:55:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:55:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:24.231161+00:00 (in 24.998382 seconds) securityworker stdout | 2025-02-14 01:55:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:29 UTC)" (scheduled at 2025-02-14 01:55:59.232325+00:00) securityworker stdout | 2025-02-14 01:55:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:55:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:55:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:55:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:55:59,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:59,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Rand 
max bound: 1 securityworker stdout | 2025-02-14 01:55:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:55:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", 
"t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 50, 59, 236501), 1, 2]) securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:55:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:55:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", 
"t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 50, 59, 236501), 1, 2]) securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:55:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:55:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:55:59,254 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:55:59,254 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:55:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:55:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:59,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:55:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:55:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:29 UTC)" executed successfully gcworker stdout | 2025-02-14 01:55:59,938 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:56:01,354 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:56:01,357 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:56:01,360 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:56:01,363 [68] [DEBUG] 
[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:56:01,366 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:56:01,668 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:56:02,539 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:56:02,850 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:56:03,268 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:56:03,271 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:56:03,274 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:56:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:56:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:34.000511+00:00 (in 29.999569 seconds) buildlogsarchiver stdout | 2025-02-14 01:56:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:34 UTC)" (scheduled at 2025-02-14 01:56:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:56:04,002 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 56, 4, 1234), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:56:04,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:56:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
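The repeated "[util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {...}" entries above show each worker process pushing its own metrics registry to a local Prometheus pushgateway, grouped by host, process name and pid. Below is a minimal sketch of that push pattern using prometheus_client; the metric name, job label and gateway address default are illustrative placeholders, not Quay's actual instrumentation.

# Minimal sketch of the per-process pushgateway push seen in the
# "[util.metrics.prometheus] pushed registry to pushgateway ..." log lines.
# The metric and job names below are illustrative, not Quay's own.
import os
import socket

from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
heartbeat = Counter("worker_heartbeat_total", "Heartbeats pushed by this worker", registry=registry)

def push_metrics(gateway="localhost:9091", process_name="exampleworker.py"):
    heartbeat.inc()
    # The grouping key mirrors what the log shows (host, process_name, pid), so
    # every worker process gets its own metric group on the gateway.
    grouping_key = {
        "host": socket.gethostname(),
        "process_name": process_name,
        "pid": str(os.getpid()),
    }
    push_to_gateway(gateway, job="quay", registry=registry, grouping_key=grouping_key)

if __name__ == "__main__":
    push_metrics()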
buildlogsarchiver stdout | 2025-02-14 01:56:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:56:04,571 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:56:04,574 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:56:04,577 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:56:04,582 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:56:04,588 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:56:04,592 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:56:04,594 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:56:04,624 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:56:04,630 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:56:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:07.807092+00:00 (in 2.002938 seconds) notificationworker stdout | 2025-02-14 01:56:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:15 UTC)" (scheduled at 2025-02-14 01:56:05.803718+00:00) notificationworker stdout | 2025-02-14 01:56:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:56:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 5, 804359), True, datetime.datetime(2025, 2, 14, 1, 56, 5, 804359), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:56:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:52.900596+00:00 (in 47.001209 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:05 UTC)" (scheduled at 2025-02-14 01:56:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:56:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:56:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:15.803718+00:00 (in 7.996171 seconds) notificationworker stdout | 2025-02-14 01:56:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:07 UTC)" (scheduled at 2025-02-14 01:56:07.807092+00:00) notificationworker stdout | 2025-02-14 01:56:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
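The notificationworker query above makes the queue-polling rule visible: an item is eligible when its available_after has passed, it is either marked available or its processing lease has expired, it still has retries left, and its queue_name matches the worker's prefix (ILIKE 'notification/%'); one item is then picked at random from a bounded window. The sketch below restates that predicate in pure Python under the assumption that the dataclass fields correspond to the logged columns; everything else is illustrative.

# Pure-Python sketch of the queue polling predicate in the notificationworker
# peewee query above. QueueItem fields mirror the logged columns.
import random
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class QueueItem:
    id: int
    queue_name: str
    available_after: datetime
    available: bool
    processing_expires: Optional[datetime]
    retries_remaining: int

def poll_queue(items, prefix="notification/", window=50, now=None):
    now = now or datetime.utcnow()
    candidates = [
        item for item in items
        if item.available_after <= now
        and (item.available or (item.processing_expires is not None and item.processing_expires <= now))
        and item.retries_remaining > 0
        and item.queue_name.lower().startswith(prefix.lower())  # ILIKE 'notification/%'
    ][:window]  # bounded candidate window, as in the logged LIMIT
    return random.choice(candidates) if candidates else None  # None -> "No more work."

if __name__ == "__main__":
    items = [
        QueueItem(1, "notification/repo_push", datetime.utcnow() - timedelta(minutes=1), True, None, 5),
        QueueItem(2, "exportactionlogs/export", datetime.utcnow(), True, None, 5),
    ]
    picked = poll_queue(items)
    print("picked:", picked.id if picked else "no more work")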
notificationworker stdout | 2025-02-14 01:56:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:07 UTC)" executed successfully nginx stdout | 10.129.2.30 - - [14/Feb/2025:01:56:11 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" namespacegcworker stdout | 2025-02-14 01:56:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:56:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:45.503718+00:00 (in 32.997559 seconds) namespacegcworker stdout | 2025-02-14 01:56:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:12 UTC)" (scheduled at 2025-02-14 01:56:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:56:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:56:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:56:14,007 [242] [DEBUG] [app] Starting request: urn:request:269f4a99-2aa8-4b04-a8ea-332d11612505 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:14,007 [245] [DEBUG] [app] Starting request: urn:request:e9720609-8a27-41c7-8ce1-f888afc39f09 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:14,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:14,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:14,012 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:56:14,012 [246] [DEBUG] [app] Starting request: urn:request:21f31502-4217-4418-b674-84d831878450 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:14,012 [246] [DEBUG] [app] Ending request: urn:request:21f31502-4217-4418-b674-84d831878450 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:21f31502-4217-4418-b674-84d831878450', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:56:14,013 [253] [DEBUG] [app] Starting request: urn:request:e62dfca6-a8cb-4980-8ec5-ad547bf327ff (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:56:14,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:56:14,013 [253] [DEBUG] [app] Ending request: urn:request:e62dfca6-a8cb-4980-8ec5-ad547bf327ff (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:e62dfca6-a8cb-4980-8ec5-ad547bf327ff', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:56:14,013 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-web stdout | 2025-02-14 01:56:14,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:14,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:14,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:14,016 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:14,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:14,018 [242] [DEBUG] [app] Starting request: urn:request:feff9af8-40ae-4aa6-98ea-7d167041826a (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:14,018 [242] [DEBUG] [app] Ending request: urn:request:feff9af8-40ae-4aa6-98ea-7d167041826a (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:feff9af8-40ae-4aa6-98ea-7d167041826a', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:14,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:56:14,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:14,018 [244] [DEBUG] [app] Starting request: urn:request:e8dca07e-8079-431a-a5bc-df77e26a387f (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:14,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:14,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:14,019 [244] [DEBUG] [app] Ending request: urn:request:e8dca07e-8079-431a-a5bc-df77e26a387f (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:e8dca07e-8079-431a-a5bc-df77e26a387f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:56:14,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:14,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:14,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:14,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:14,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:56:14,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:14,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:56:14,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:14,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:14,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:14,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:14,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:14,036 [242] [DEBUG] [app] Ending request: urn:request:269f4a99-2aa8-4b04-a8ea-332d11612505 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:269f4a99-2aa8-4b04-a8ea-332d11612505', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:14,036 [242] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:56:14,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:56:14,037 [245] [DEBUG] [app] Ending request: urn:request:e9720609-8a27-41c7-8ce1-f888afc39f09 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:e9720609-8a27-41c7-8ce1-f888afc39f09', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:14,037 [245] [DEBUG] [data.database] Disconnecting from database. 
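The /health/instance handling above validates the database by setting a 5000 ms statement_timeout, checking that team roles can be read, then resetting the timeout to 0. A minimal sketch of that bounded health probe follows, assuming psycopg2 and a placeholder DSN; psycopg2 interpolates the %s parameters client-side, which is why a parameterized SET works here just as in the logged queries.

# Sketch of the bounded database health check pattern seen in the
# /health/instance requests: set statement_timeout, run a trivial query
# against teamrole, reset the timeout. DSN is a placeholder.
import psycopg2

def database_is_healthy(dsn="dbname=quay user=quay host=localhost", timeout_ms=5000):
    conn = None
    try:
        conn = psycopg2.connect(dsn)
        with conn, conn.cursor() as cur:
            cur.execute("SET statement_timeout=%s;", (timeout_ms,))
            cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', (1,))
            cur.fetchone()
            cur.execute("SET statement_timeout=%s;", (0,))
        return True
    except psycopg2.Error:
        return False
    finally:
        if conn is not None:
            conn.close()

if __name__ == "__main__":
    print("healthy" if database_is_healthy() else "unhealthy")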
gunicorn-web stdout | 2025-02-14 01:56:14,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) exportactionlogsworker stdout | 2025-02-14 01:56:14,913 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:56:15,010 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:56:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:25.803718+00:00 (in 9.999562 seconds) notificationworker stdout | 2025-02-14 01:56:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:25 UTC)" (scheduled at 2025-02-14 01:56:15.803718+00:00) notificationworker stdout | 2025-02-14 01:56:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:56:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 15, 804363), True, datetime.datetime(2025, 2, 14, 1, 56, 15, 804363), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:56:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:56:17,105 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:56:20,444 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:56:20,867 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:56:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:56:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:52.310342+00:00 (in 29.999564 seconds) autopruneworker stdout | 2025-02-14 01:56:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:52 UTC)" (scheduled at 2025-02-14 01:56:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:56:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494582316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:56:22,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:56:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
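The autopruneworker query above uses ORDER BY last_ran_ms ASC NULLS FIRST ... LIMIT 1 FOR UPDATE SKIP LOCKED, so the stalest task is claimed and concurrent workers skip rows that another worker already holds instead of blocking. The sketch below shows that claiming pattern in simplified form (it drops the namespace subquery from the logged SQL); the DSN and transaction handling are illustrative.

# Sketch of cooperative task claiming with FOR UPDATE SKIP LOCKED, following
# the autopruneworker query above in simplified form.
import time
import psycopg2

CLAIM_SQL = """
    SELECT id, namespace_id, last_ran_ms
      FROM autoprunetaskstatus
     WHERE last_ran_ms < %s OR last_ran_ms IS NULL
  ORDER BY last_ran_ms ASC NULLS FIRST
     LIMIT 1
       FOR UPDATE SKIP LOCKED
"""

def claim_one_task(conn, stale_before_ms):
    with conn.cursor() as cur:
        cur.execute(CLAIM_SQL, (stale_before_ms,))
        return cur.fetchone()  # None corresponds to "no autoprune tasks found, exiting..."

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=quay user=quay host=localhost")
    try:
        with conn:  # the row lock is held for the duration of this transaction
            task = claim_one_task(conn, int(time.time() * 1000) - 60_000)
            print(task or "no task to claim")
    finally:
        conn.close()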
autopruneworker stdout | 2025-02-14 01:56:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:56:22,586 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:56:23,504 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:56:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:56:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:29.232325+00:00 (in 5.000636 seconds) securityworker stdout | 2025-02-14 01:56:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:54 UTC)" (scheduled at 2025-02-14 01:56:24.231161+00:00) securityworker stdout | 2025-02-14 01:56:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:56:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:56:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:56:24,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:56:24,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:56:24,246 [88] [DEBUG] [data.database] Disconnecting from database. 
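The securityworker lines above show the Clair probe: generate a JWT, then GET the indexer's /indexer/api/v1/index_state endpoint with it. The sketch below reproduces only that request shape; the claims, signing key and HS256 algorithm are placeholders, since the log does not show Quay's real token contents or signing setup.

# Sketch of the securityworker's Clair index_state probe: mint a short-lived
# JWT (placeholder claims/key/alg) and send it as a Bearer token.
import time

import jwt        # PyJWT
import requests

CLAIR_BASE = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

def fetch_index_state(signing_key="not-the-real-key"):
    token = jwt.encode(
        {"iss": "quay", "iat": int(time.time()), "exp": int(time.time()) + 300},
        signing_key,
        algorithm="HS256",
    )
    resp = requests.get(
        f"{CLAIR_BASE}/indexer/api/v1/index_state",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_index_state())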
securityworker stdout | 2025-02-14 01:56:24,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:56:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:56:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:25.392556+00:00 (in 1.001710 seconds) gcworker stdout | 2025-02-14 01:56:24,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:54 UTC)" (scheduled at 2025-02-14 01:56:24.390410+00:00) gcworker stdout | 2025-02-14 01:56:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:56:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:54 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:56:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:56:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:30.212654+00:00 (in 4.996954 seconds) exportactionlogsworker stdout | 2025-02-14 01:56:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:25 UTC)" (scheduled at 2025-02-14 01:56:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:56:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:56:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:25 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:56:25,335 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:56:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:56:25,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:54.390410+00:00 (in 28.997442 seconds) gcworker stdout | 2025-02-14 01:56:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:55 UTC)" (scheduled at 2025-02-14 01:56:25.392556+00:00) gcworker stdout | 2025-02-14 01:56:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:56:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497885401, None, 1, 0]) gcworker stdout | 2025-02-14 01:56:25,404 [64] [DEBUG] [data.database] Disconnecting from database. 
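The recurring "Looking for jobs to run" / "Running job ... (trigger: interval[0:00:30], next run at: ...)" entries throughout this log come from APScheduler interval triggers: each worker registers its poll and watchdog methods with an interval trigger and the scheduler logs every run at DEBUG/INFO. A minimal sketch of that registration pattern follows; the worker class and intervals are illustrative.

# Sketch of the APScheduler interval-job pattern behind the recurring
# "Running job ... (trigger: interval[...])" log entries.
import logging

from apscheduler.schedulers.blocking import BlockingScheduler

# DEBUG logging surfaces the scheduler's own "Looking for jobs to run" lines.
logging.basicConfig(level=logging.DEBUG)

class ExampleWorker:
    def poll_queue(self):
        print("getting work item from queue...")

    def run_watchdog(self):
        print("running watchdog...")

if __name__ == "__main__":
    worker = ExampleWorker()
    scheduler = BlockingScheduler()
    scheduler.add_job(worker.poll_queue, "interval", seconds=10)
    scheduler.add_job(worker.run_watchdog, "interval", seconds=60)
    scheduler.start()  # blocks; Ctrl-C to stop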
gcworker stdout | 2025-02-14 01:56:25,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:56:25,739 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:56:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:35.803718+00:00 (in 9.999590 seconds) notificationworker stdout | 2025-02-14 01:56:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:35 UTC)" (scheduled at 2025-02-14 01:56:25.803718+00:00) notificationworker stdout | 2025-02-14 01:56:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:56:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 25, 804328), True, datetime.datetime(2025, 2, 14, 1, 56, 25, 804328), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:56:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:56:26,524 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:56:26,944 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:56:27,311 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:56:27,630 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:56:27,990 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:56:28,139 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:56:28,402 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:56:28,600 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:56:28,781 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:56:29,007 [245] [DEBUG] [app] Starting request: urn:request:8fc9837d-5feb-40fa-b30c-1ce278935e1a (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:29,007 [244] [DEBUG] [app] Starting request: urn:request:168829eb-728e-4767-b7cd-d1e574daa45d (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:29,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:29,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:29,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:29,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:56:29,013 [253] [DEBUG] [app] Starting request: urn:request:9fc04a1b-cad8-4736-bb29-939832843a9e (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:29,013 [246] [DEBUG] [app] Starting request: urn:request:42100b15-69bf-4cf0-9975-3653f98afa5c (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:29,013 [253] [DEBUG] [app] Ending request: urn:request:9fc04a1b-cad8-4736-bb29-939832843a9e (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:9fc04a1b-cad8-4736-bb29-939832843a9e', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:56:29,013 [246] [DEBUG] [app] Ending request: urn:request:42100b15-69bf-4cf0-9975-3653f98afa5c (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:42100b15-69bf-4cf0-9975-3653f98afa5c', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:56:29,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:56:29,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:56:29,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:29,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:29,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:29,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:29,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:29,018 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:29,018 [242] [DEBUG] [app] Starting request: urn:request:83bc41d9-b446-4bb0-aecc-0d00904e9e56 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:29,019 [242] [DEBUG] [app] Ending request: urn:request:83bc41d9-b446-4bb0-aecc-0d00904e9e56 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:83bc41d9-b446-4bb0-aecc-0d00904e9e56', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:29,019 [245] [DEBUG] [app] Starting request: urn:request:a73f1760-1548-4506-bfc4-cc3e8cea704f (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:29,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:29,019 [245] [DEBUG] [app] Ending request: urn:request:a73f1760-1548-4506-bfc4-cc3e8cea704f (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:a73f1760-1548-4506-bfc4-cc3e8cea704f', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:56:29,020 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:56:29,020 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:29,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:29,020 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:29,020 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:29,020 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:29,020 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:29,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:56:29,026 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:29,026 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:56:29,026 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:29,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:29,033 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:29,035 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:29,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:29,037 [245] [DEBUG] [app] Ending request: urn:request:8fc9837d-5feb-40fa-b30c-1ce278935e1a (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:8fc9837d-5feb-40fa-b30c-1ce278935e1a', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:29,037 [244] [DEBUG] [app] Ending request: urn:request:168829eb-728e-4767-b7cd-d1e574daa45d (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:168829eb-728e-4767-b7cd-d1e574daa45d', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:29,037 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:56:29,037 [244] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:56:29,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.031) gunicorn-web stdout | 2025-02-14 01:56:29,038 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) securityworker stdout | 2025-02-14 01:56:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:56:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:54.231161+00:00 (in 24.998360 seconds) securityworker stdout | 2025-02-14 01:56:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:59 UTC)" (scheduled at 2025-02-14 01:56:29.232325+00:00) securityworker stdout | 2025-02-14 01:56:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:56:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:56:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:56:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:56:29,245 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:29,245 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:29,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:29,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) repositorygcworker stdout | 2025-02-14 01:56:29,247 [85] [DEBUG] [util.metrics.prometheus] 
pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 51, 29, 236622), 1, 2]) securityworker stderr | 
2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 51, 29, 236622), 1, 2]) securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 
1-2 by worker securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:56:59 UTC)" executed successfully gcworker stdout | 2025-02-14 01:56:29,959 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:56:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:56:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:25.215238+00:00 (in 55.002143 seconds) exportactionlogsworker stdout | 2025-02-14 01:56:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:30 UTC)" (scheduled at 2025-02-14 01:56:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:56:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. 
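The securityworker allocator lines above ("Total range: 1-2", "Selected random hole ...", "No candidates, marking entire block completed 1-2", "No more work by worker") trace a range scanner: it tracks which parts of the manifest id range are still unscanned, picks a hole at random, fetches candidates for that block, and marks blocks with no candidates as completed until the range is exhausted. The generator below is a simplified pure-Python stand-in for that behaviour, not Quay's actual util.migrate.allocator implementation.

# Simplified stand-in for the allocator behaviour traced in the securityworker
# lines above: pick a random uncompleted hole, fetch one block of candidates,
# and mark empty blocks completed until no work remains.
import random

def scan_id_range(min_id, max_id, fetch_block, block_size=1):
    holes = [(min_id, max_id)]          # half-open ranges [start, end) still to scan
    while holes:
        index = random.randrange(len(holes))
        start, end = holes.pop(index)
        print(f"Selected random hole {index}, selecting from hole range: {start}-{end}")
        block_start = random.randint(start, max(start, end - block_size))
        block_end = min(block_start + block_size, end)
        candidates = fetch_block(block_start, block_end)
        if not candidates:
            print(f"No candidates, marking entire block completed {block_start}-{block_end}")
        for item in candidates:
            yield item
        # Put back whatever part of the hole this block did not cover.
        if start < block_start:
            holes.append((start, block_start))
        if block_end < end:
            holes.append((block_end, end))
    print("No more work by worker")

if __name__ == "__main__":
    # Mimic the 1-2 range in the log with a block that yields no candidates.
    list(scan_id_range(1, 2, fetch_block=lambda lo, hi: []))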
exportactionlogsworker stdout | 2025-02-14 01:56:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 30, 213393), True, datetime.datetime(2025, 2, 14, 1, 56, 30, 213393), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:56:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:56:30,223 [63] [DEBUG] [data.database] Disconnecting from database. exportactionlogsworker stdout | 2025-02-14 01:56:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:56:31,362 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:56:31,365 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:56:31,369 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:56:31,372 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:56:31,374 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:56:31,704 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:56:32,548 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:56:32,886 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:56:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:56:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 
2025-02-14 01:56:36.014770+00:00 (in 3.002644 seconds) repositorygcworker stdout | 2025-02-14 01:56:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:33 UTC)" (scheduled at 2025-02-14 01:56:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:56:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. repositorygcworker stdout | 2025-02-14 01:56:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 33, 12476), True, datetime.datetime(2025, 2, 14, 1, 56, 33, 12476), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:56:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:56:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:56:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:56:33,277 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:56:33,280 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:56:33,283 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:56:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:56:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:04.000511+00:00 (in 29.999568 seconds) buildlogsarchiver stdout | 2025-02-14 01:56:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:04 UTC)" (scheduled at 2025-02-14 01:56:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:56:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 56, 34, 1202), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:56:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:56:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
buildlogsarchiver stdout | 2025-02-14 01:56:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:56:34,580 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:56:34,583 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:56:34,587 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:56:34,589 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:56:34,597 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:56:34,602 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:56:34,605 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:56:34,630 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:56:34,639 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} nginx stdout | 10.129.2.30 - - [14/Feb/2025:01:56:35 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" notificationworker stdout | 2025-02-14 01:56:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:45.803718+00:00 (in 9.999540 seconds) notificationworker stdout | 2025-02-14 01:56:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:45 UTC)" (scheduled at 2025-02-14 01:56:35.803718+00:00) notificationworker stdout | 2025-02-14 01:56:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
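The util.metrics.prometheus records above show every process pushing its metrics registry to a local Pushgateway at http://localhost:9091, grouped by host, process_name, and pid. A minimal sketch of that push with prometheus_client follows; it assumes a reachable Pushgateway at that address, and the metric and job names are illustrative rather than Quay's.

```python
# Minimal sketch of pushing a registry to a Pushgateway with a grouping key,
# mirroring the util.metrics.prometheus records above. Metric/job names are illustrative.
import os
import socket

from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
heartbeat = Counter("worker_heartbeat_total", "Heartbeats pushed by this process", registry=registry)
heartbeat.inc()

push_to_gateway(
    "localhost:9091",                        # the Pushgateway address seen in the log
    job="quay",                              # job label; an assumption, not taken from the log
    registry=registry,
    grouping_key={
        "host": socket.gethostname(),        # e.g. the pod name
        "process_name": "exampleworker.py",  # illustrative
        "pid": str(os.getpid()),
    },
)
```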
notificationworker stdout | 2025-02-14 01:56:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 35, 804393), True, datetime.datetime(2025, 2, 14, 1, 56, 35, 804393), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:56:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:56:36,015 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:56:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:33.011632+00:00 (in 56.996344 seconds) repositorygcworker stdout | 2025-02-14 01:56:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:36 UTC)" (scheduled at 2025-02-14 01:56:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:56:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:56:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:36 UTC)" executed successfully nginx stdout | 10.128.4.34 - - [14/Feb/2025:01:56:39 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:44,007 [245] [DEBUG] [app] Starting request: urn:request:38fccd03-d9ac-4224-8cc5-76cc83059c92 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:44,007 [242] [DEBUG] [app] Starting request: urn:request:6c00d759-4fa5-40f8-8d88-904ac627bce2 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:44,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:44,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:44,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:44,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:56:44,013 [251] [DEBUG] [app] Starting request: urn:request:dd79d7bc-15e8-474b-accf-01c268031835 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:44,013 [251] [DEBUG] [app] Ending request: urn:request:dd79d7bc-15e8-474b-accf-01c268031835 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:dd79d7bc-15e8-474b-accf-01c268031835', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:56:44,013 [253] [DEBUG] [app] Starting request: urn:request:476afc8a-adbd-4721-8a0f-051ee108c02d (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:44,013 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:56:44,014 [253] [DEBUG] [app] Ending request: urn:request:476afc8a-adbd-4721-8a0f-051ee108c02d (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:476afc8a-adbd-4721-8a0f-051ee108c02d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:44,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:56:44,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:44,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:44,015 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:44,016 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:44,017 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:44,018 [242] [DEBUG] [app] Starting request: urn:request:3ba63f5a-2ad9-422e-b5ba-454cc2f50930 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:44,018 [242] [DEBUG] [app] Ending request: urn:request:3ba63f5a-2ad9-422e-b5ba-454cc2f50930 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:3ba63f5a-2ad9-422e-b5ba-454cc2f50930', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:44,019 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:56:44,019 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:44,019 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:44,019 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:44,019 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:44,020 [244] [DEBUG] [app] Starting request: urn:request:65319da6-c8a3-4395-b373-ddddbb2f9bef (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:44,021 [244] [DEBUG] [app] Ending request: urn:request:65319da6-c8a3-4395-b373-ddddbb2f9bef (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:65319da6-c8a3-4395-b373-ddddbb2f9bef', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:44,021 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:44,021 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.002) gunicorn-web stdout | 2025-02-14 01:56:44,022 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:44,022 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:44,025 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:56:44,025 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:44,027 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:56:44,027 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:44,032 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:44,034 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:44,035 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:44,036 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:44,037 [245] [DEBUG] [app] Ending request: urn:request:38fccd03-d9ac-4224-8cc5-76cc83059c92 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:38fccd03-d9ac-4224-8cc5-76cc83059c92', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:44,038 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:56:44,038 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) gunicorn-web stdout | 2025-02-14 01:56:44,038 [242] [DEBUG] [app] Ending request: urn:request:6c00d759-4fa5-40f8-8d88-904ac627bce2 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:6c00d759-4fa5-40f8-8d88-904ac627bce2', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:44,039 [242] [DEBUG] [data.database] Disconnecting from database. 
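The health-check records above show the kubelet probe hitting /health/instance, after which the web worker calls its own /v1/_internal_ping and /_internal_ping endpoints over https://localhost:8443 with certificate verification disabled, which is what produces the InsecureRequestWarning entries. A minimal sketch of probing those endpoints the same way, assuming you are running it inside the pod where localhost:8443 is reachable:

```python
# Minimal sketch of the self-probe pattern behind the health-check records above.
# verify=False reproduces the InsecureRequestWarning seen in the gunicorn-web log.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)  # optional: silence the warning

BASE = "https://localhost:8443"  # the local endpoint the web workers call, per the log

for path in ("/v1/_internal_ping", "/_internal_ping"):
    resp = requests.get(BASE + path, verify=False, timeout=5)
    print(path, resp.status_code)
```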
gunicorn-web stdout | 2025-02-14 01:56:44,039 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.033 47 0.033) exportactionlogsworker stdout | 2025-02-14 01:56:44,933 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:56:45,047 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:56:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:56:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:12.505687+00:00 (in 27.001473 seconds) namespacegcworker stdout | 2025-02-14 01:56:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:45 UTC)" (scheduled at 2025-02-14 01:56:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:56:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:56:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 45, 504518), True, datetime.datetime(2025, 2, 14, 1, 56, 45, 504518), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:56:45,514 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:56:45,514 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:56:45,514 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:56:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:55.803718+00:00 (in 9.999540 seconds) notificationworker stdout | 2025-02-14 01:56:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:55 UTC)" (scheduled at 2025-02-14 01:56:45.803718+00:00) notificationworker stdout | 2025-02-14 01:56:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:56:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 45, 804386), True, datetime.datetime(2025, 2, 14, 1, 56, 45, 804386), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:56:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:56:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:56:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:56:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:46.009738+00:00 (in 59.999479 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:56:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:46 UTC)" (scheduled at 2025-02-14 01:56:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:56:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:56:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:56:46,018 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:56:47,138 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:56:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:56:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:59.123196+00:00 (in 10.997580 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:56:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:48 UTC)" (scheduled at 2025-02-14 01:56:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:56:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
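The peewee query the notificationworker logs above is the same queueitem poll that every QueueWorker variant in this log runs, differing only in the queue-name prefix (notification/%, exportactionlogs/%, repositorygc/%, and so on). A standalone version of essentially that query is sketched below with psycopg2; the connection parameters are placeholders, while the SQL mirrors the logged statement.

```python
# Standalone sketch of the queueitem poll shown in the peewee DEBUG record above.
# Connection parameters are placeholders; the SQL mirrors the logged query.
from datetime import datetime

import psycopg2

conn = psycopg2.connect(host="quayregistry-quay-database", dbname="quayregistry-quay-database",
                        user="quay", password="...")  # placeholders

POLL_SQL = """
SELECT t1.id, t1.queue_name, t1.body, t1.available_after, t1.available,
       t1.processing_expires, t1.retries_remaining, t1.state_id
FROM queueitem AS t1
INNER JOIN (
    SELECT t1.id FROM queueitem AS t1
    WHERE t1.available_after <= %s
      AND (t1.available = %s OR t1.processing_expires <= %s)
      AND t1.retries_remaining > %s
      AND t1.queue_name ILIKE %s
    LIMIT %s
) AS j1 ON t1.id = j1.id
ORDER BY Random()
LIMIT %s OFFSET %s
"""

now = datetime.utcnow()
with conn, conn.cursor() as cur:
    cur.execute(POLL_SQL, [now, True, now, 0, "notification/%", 50, 1, 0])
    item = cur.fetchone()
    print("work item:" if item else "No more work.", item)
```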
securityscanningnotificationworker stdout | 2025-02-14 01:56:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:56:50,480 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:56:50,874 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:56:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:56:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:22.310342+00:00 (in 29.999572 seconds) autopruneworker stdout | 2025-02-14 01:56:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:22 UTC)" (scheduled at 2025-02-14 01:56:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:56:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494612316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:56:52,320 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:56:52,320 [56] [DEBUG] [data.database] Disconnecting from database. 
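The autopruneworker query above claims at most one autoprunetaskstatus row using ORDER BY ... NULLS FIRST, LIMIT 1, and FOR UPDATE SKIP LOCKED, so concurrent workers never block on a task another worker already holds. The sketch below illustrates that claim pattern against a generic task table; the table and columns are simplified placeholders, not Quay's schema, and the cutoff is only an example.

```python
# Simplified sketch of the FOR UPDATE SKIP LOCKED claim pattern used by the
# autopruneworker query above. "task" and its columns are illustrative, not Quay's schema.
import time

import psycopg2

conn = psycopg2.connect(host="quayregistry-quay-database", dbname="quayregistry-quay-database",
                        user="quay", password="...")  # placeholders

CLAIM_SQL = """
SELECT id
FROM task
WHERE last_ran_ms < %s OR last_ran_ms IS NULL
ORDER BY last_ran_ms ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

cutoff_ms = int(time.time() * 1000) - 30_000  # example: tasks not run in the last 30 seconds

with conn, conn.cursor() as cur:              # the claim and the update share one transaction
    cur.execute(CLAIM_SQL, [cutoff_ms])
    row = cur.fetchone()
    if row is None:
        print("no tasks found, exiting...")
    else:
        cur.execute("UPDATE task SET last_ran_ms = %s WHERE id = %s",
                    [int(time.time() * 1000), row[0]])
```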
autopruneworker stdout | 2025-02-14 01:56:52,320 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:22 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:56:52,610 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:05.898886+00:00 (in 12.997761 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:52 UTC)" (scheduled at 2025-02-14 01:56:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,910 [71] [DEBUG] [data.database] Disconnecting from database. 
manifestsubjectbackfillworker stdout | 2025-02-14 01:56:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:56:53,517 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:56:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:56:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:59.232325+00:00 (in 5.000633 seconds) securityworker stdout | 2025-02-14 01:56:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:24 UTC)" (scheduled at 2025-02-14 01:56:54.231161+00:00) securityworker stdout | 2025-02-14 01:56:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:56:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:56:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:56:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:56:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:56:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
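The Max(id)/Min(id) queries above give the securityworker the manifest id range it scans, and the util.migrate.allocator DEBUG records elsewhere in this log ("Selected random hole", "Selecting from hole range", "Marking the range completed", "No more work by worker") show it handing out random sub-blocks of that range until the whole range is covered. The sketch below is a much-simplified illustration of that allocation idea, not Quay's util.migrate.allocator implementation.

```python
# Much-simplified illustration of the random-block allocation idea suggested by the
# util.migrate.allocator DEBUG records; not Quay's actual implementation.
import random


def yield_random_blocks(min_id, max_id, block_size):
    """Yield (start, end) blocks in random order until [min_id, max_id) is fully covered."""
    holes = [(min_id, max_id)]                      # unvisited sub-ranges ("holes")
    while holes:
        idx = random.randrange(len(holes))          # "Selected random hole i with n total holes"
        lo, hi = holes.pop(idx)                     # "Selecting from hole range: lo-hi"
        start = random.randint(lo, max(lo, hi - block_size))  # "Rand max bound"
        end = min(start + block_size, hi)
        yield start, end                            # caller processes rows with start <= id < end
        # "Marking the range completed" / "Discarding block": keep only what is left of the hole
        if lo < start:
            holes.append((lo, start))
        if end < hi:
            holes.append((end, hi))


if __name__ == "__main__":
    for block in yield_random_blocks(1, 2, block_size=100):
        print("indexing ids in", block)             # with range 1-2 there is a single tiny block
```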
securityworker stdout | 2025-02-14 01:56:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:56:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:56:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:56:55.392556+00:00 (in 1.001730 seconds) gcworker stdout | 2025-02-14 01:56:54,390 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:24 UTC)" (scheduled at 2025-02-14 01:56:54.390410+00:00) gcworker stdout | 2025-02-14 01:56:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:56:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:24 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:56:55,362 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:56:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:56:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:24.390410+00:00 (in 28.997446 seconds) gcworker stdout | 2025-02-14 01:56:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:25 UTC)" (scheduled at 2025-02-14 01:56:55.392556+00:00) gcworker stdout | 2025-02-14 01:56:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:56:55,401 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497915401, None, 1, 0]) gcworker stdout | 2025-02-14 01:56:55,404 [64] [DEBUG] [data.database] Disconnecting from database. 
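Each SecurityWorker._index_in_scanner run, like the one that just completed above, begins by fetching Clair's indexer state (the GET to .../indexer/api/v1/index_state in the preceding records, sent after Quay generates a JWT for the scanner). A minimal sketch of that request follows; the service URL is taken from the log, while the Authorization header is an assumption that only applies when Clair is deployed with PSK auth, and the token shown is a placeholder rather than the JWT Quay generates.

```python
# Minimal sketch of fetching Clair's indexer state, as the securityworker does at the
# start of each cycle. The service URL is taken from the log; the token is a placeholder.
import requests

CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"
headers = {"Authorization": "Bearer <psk-signed-jwt>"}  # placeholder; omit if Clair auth is disabled

resp = requests.get(CLAIR + "/indexer/api/v1/index_state", headers=headers, timeout=5)
resp.raise_for_status()
print(resp.json())  # Clair reports its current indexer state here
```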
gcworker stdout | 2025-02-14 01:56:55,404 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:25 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:56:55,771 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:56:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:56:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:05.803718+00:00 (in 9.999586 seconds) notificationworker stdout | 2025-02-14 01:56:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:05 UTC)" (scheduled at 2025-02-14 01:56:55.803718+00:00) notificationworker stdout | 2025-02-14 01:56:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:56:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 55, 804491), True, datetime.datetime(2025, 2, 14, 1, 56, 55, 804491), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:56:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:56:55,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:56:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:05 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:56:56,554 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} globalpromstats stdout | 2025-02-14 01:56:56,981 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:56:57,347 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:56:57,666 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:56:57,996 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:56:58,146 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:56:58,430 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:56:58,610 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:56:58,817 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:56:59,008 [242] [DEBUG] [app] Starting request: urn:request:59bfed44-c31d-4159-8286-f12cae9c2246 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:59,008 [245] [DEBUG] [app] Starting request: urn:request:a6a50ef0-645a-428c-a7d9-6aed76d7a147 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:56:59,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:59,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:59,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:59,012 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:56:59,013 [252] [DEBUG] [app] Starting request: urn:request:5f50ef63-9137-4db0-b0cf-10c628209257 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:56:59,013 [252] [DEBUG] [app] Ending request: urn:request:5f50ef63-9137-4db0-b0cf-10c628209257 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:5f50ef63-9137-4db0-b0cf-10c628209257', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:56:59,013 [246] [DEBUG] [app] Starting request: urn:request:4900e920-bec0-475e-835c-2ea73815914b (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:56:59,014 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-registry stdout | 2025-02-14 01:56:59,014 [246] [DEBUG] [app] Ending request: urn:request:4900e920-bec0-475e-835c-2ea73815914b (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:4900e920-bec0-475e-835c-2ea73815914b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:56:59,014 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:56:59,014 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:59,014 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:56:59,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:59,016 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:56:59,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:59,018 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:56:59,018 [244] [DEBUG] [app] Starting request: urn:request:a0015591-54d9-4d69-b74b-b05ffd9bb558 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:59,019 [244] [DEBUG] [app] Ending request: urn:request:a0015591-54d9-4d69-b74b-b05ffd9bb558 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:a0015591-54d9-4d69-b74b-b05ffd9bb558', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:56:59,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:59,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:59,019 [243] [DEBUG] [app] Starting request: urn:request:7d2331ef-ccd0-480e-9e47-aac7fc8f5dad (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:56:59,020 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:59,020 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:59,020 [243] [DEBUG] [app] Ending request: urn:request:7d2331ef-ccd0-480e-9e47-aac7fc8f5dad (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:7d2331ef-ccd0-480e-9e47-aac7fc8f5dad', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:56:59,020 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:56:59 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:56:59,020 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:56:59,021 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:56:59,021 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:56:59,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:56:59,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:59,026 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:56:59,026 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:56:59,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:59,033 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:56:59,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:59,036 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:56:59,037 [242] [DEBUG] [app] Ending request: urn:request:59bfed44-c31d-4159-8286-f12cae9c2246 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:59bfed44-c31d-4159-8286-f12cae9c2246', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:59,037 [242] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:56:59,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.031) gunicorn-web stdout | 2025-02-14 01:56:59,038 [245] [DEBUG] [app] Ending request: urn:request:a6a50ef0-645a-428c-a7d9-6aed76d7a147 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:a6a50ef0-645a-428c-a7d9-6aed76d7a147', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:56:59,039 [245] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:56:59 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) gunicorn-web stdout | 2025-02-14 01:56:59,039 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:56:59 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" securityscanningnotificationworker stdout | 2025-02-14 01:56:59,123 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:56:59,123 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:48.125163+00:00 (in 49.001530 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:56:59,123 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:59 UTC)" (scheduled at 2025-02-14 01:56:59.123196+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:56:59,123 [87] [DEBUG] [workers.queueworker] Getting work item from queue. 
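The database half of the health check above bounds its probe with a statement timeout: set statement_timeout to 5000 ms, run a one-row SELECT against teamrole, then reset the timeout to 0. The same sequence as a standalone psycopg2 sketch, with placeholder connection settings:

```python
# Standalone version of the statement_timeout-bounded probe in the health-check records above.
# Connection parameters are placeholders.
import psycopg2

conn = psycopg2.connect(host="quayregistry-quay-database", dbname="quayregistry-quay-database",
                        user="quay", password="...")

with conn, conn.cursor() as cur:
    cur.execute("SET statement_timeout=%s;", (5000,))            # cap the probe at 5000 ms
    cur.execute('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1])
    healthy = cur.fetchone() is not None                          # any team role row means the DB answers
    cur.execute("SET statement_timeout=%s;", (0,))                # restore the default (no timeout)

print("database healthy:", healthy)
conn.close()
```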
securityscanningnotificationworker stdout | 2025-02-14 01:56:59,124 [87] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 56, 59, 123901), True, datetime.datetime(2025, 2, 14, 1, 56, 59, 123901), 0, 'secscanv4/%', 50, 1, 0]) securityscanningnotificationworker stdout | 2025-02-14 01:56:59,133 [87] [DEBUG] [workers.queueworker] No more work. securityscanningnotificationworker stdout | 2025-02-14 01:56:59,133 [87] [DEBUG] [data.database] Disconnecting from database. securityscanningnotificationworker stdout | 2025-02-14 01:56:59,133 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:57:59 UTC)" executed successfully securityworker stdout | 2025-02-14 01:56:59,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:56:59,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:24.231161+00:00 (in 24.998389 seconds) securityworker stdout | 2025-02-14 01:56:59,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:29 UTC)" (scheduled at 2025-02-14 01:56:59.232325+00:00) securityworker stdout | 2025-02-14 01:56:59,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:56:59,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:56:59,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:56:59,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:59,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:59,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:59,245 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:59,245 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:59,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", 
"t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:59,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:59,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN 
"manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 51, 59, 236531), 1, 2]) securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:56:59,251 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:56:59,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND 
("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 51, 59, 236531), 1, 2]) securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,254 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:56:59,254 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:59,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:56:59 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:56:59,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:29 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:56:59,279 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gcworker stdout | 2025-02-14 01:56:59,995 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} gunicorn-web stdout | 2025-02-14 01:57:01,372 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:57:01,375 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:57:01,379 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:57:01,382 [68] [DEBUG] [util.metrics.prometheus] pushed registry 
to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:57:01,384 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:57:01,737 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:57:02,574 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:57:02,919 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} gunicorn-secscan stdout | 2025-02-14 01:57:03,284 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:57:03,287 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:57:03,290 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:57:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:57:04,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:34.000511+00:00 (in 29.999567 seconds) buildlogsarchiver stdout | 2025-02-14 01:57:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:34 UTC)" (scheduled at 2025-02-14 01:57:04.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:57:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 57, 4, 1202), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:57:04,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:57:04,011 [59] [DEBUG] [data.database] Disconnecting from database. 
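The repeated "pushed registry to pushgateway at http://localhost:9091 with grouping key ..." entries are each worker process pushing its own metrics registry, keyed by host, process_name and pid. A minimal sketch with the prometheus_client library follows; the metric and job names are assumptions not present in the log.

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    # Hypothetical metric; the real registries carry the workers' own metrics.
    heartbeat = Gauge("worker_heartbeat_timestamp", "Last heartbeat time", registry=registry)
    heartbeat.set_to_current_time()

    # Grouping key mirrors the one in the log: host, process_name, pid.
    push_to_gateway(
        "localhost:9091",
        job="quay",  # assumed job name, not shown in the log
        registry=registry,
        grouping_key={
            "host": "quayregistry-quay-app-5dc574b8bf-tszt7",
            "process_name": "gcworker.py",
            "pid": "64",
        },
    )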
buildlogsarchiver stdout | 2025-02-14 01:57:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:34 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:57:04,592 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:57:04,595 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:57:04,599 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:57:04,603 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:57:04,607 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:57:04,610 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:57:04,613 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:57:04,643 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:57:04,649 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:57:05,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:05,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:07.807092+00:00 (in 2.002888 seconds) notificationworker stdout | 2025-02-14 01:57:05,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:15 UTC)" (scheduled at 2025-02-14 01:57:05.803718+00:00) notificationworker stdout | 2025-02-14 01:57:05,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:57:05,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 5, 804507), True, datetime.datetime(2025, 2, 14, 1, 57, 5, 804507), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:05,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:05,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:57:05,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:15 UTC)" executed successfully manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,899 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,899 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:52.900596+00:00 (in 47.001213 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,899 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:05 UTC)" (scheduled at 2025-02-14 01:57:05.898886+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,900 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."subject_backfilled" = %s) OR ("t1"."subject_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,908 [71] [DEBUG] [__main__] Manifest subject backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,908 [71] [DEBUG] [data.database] Disconnecting from database. manifestsubjectbackfillworker stdout | 2025-02-14 01:57:05,908 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_subject (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:57:07,807 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:07,807 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:15.803718+00:00 (in 7.996180 seconds) notificationworker stdout | 2025-02-14 01:57:07,807 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:07 UTC)" (scheduled at 2025-02-14 01:57:07.807092+00:00) notificationworker stdout | 2025-02-14 01:57:07,807 [75] [DEBUG] [workers.queueworker] Running watchdog. 
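The notificationworker and securityscanningnotificationworker poll the shared queueitem table with the same shape of query: pick a random item whose available_after has passed (or whose processing lease expired), that still has retries remaining, and whose queue_name matches a prefix. The simplified restatement below keeps the predicates from the logged query but trims the column list and join; the helper name and psycopg2 usage are illustrative assumptions, not Quay's worker code.

    import datetime
    import psycopg2

    POLL_SQL = """
    SELECT "t1"."id", "t1"."queue_name", "t1"."body"
    FROM "queueitem" AS "t1"
    WHERE ("t1"."available_after" <= %s)
      AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))
      AND ("t1"."retries_remaining" > %s)
      AND ("t1"."queue_name" ILIKE %s)
    ORDER BY Random()
    LIMIT 1
    """

    def poll_queue(dsn, prefix="notification/%"):
        """Hedged sketch of the poll seen in the log (no claim/locking step shown)."""
        now = datetime.datetime.utcnow()
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(POLL_SQL, (now, True, now, 0, prefix))
            return cur.fetchone()  # None corresponds to "No more work."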
notificationworker stdout | 2025-02-14 01:57:07,807 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:07 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:57:12,505 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:57:12,506 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:45.503718+00:00 (in 32.997577 seconds) namespacegcworker stdout | 2025-02-14 01:57:12,506 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:12 UTC)" (scheduled at 2025-02-14 01:57:12.505687+00:00) namespacegcworker stdout | 2025-02-14 01:57:12,506 [73] [DEBUG] [workers.queueworker] Running watchdog. namespacegcworker stdout | 2025-02-14 01:57:12,506 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:12 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:57:14,007 [242] [DEBUG] [app] Starting request: urn:request:e71f4122-8a50-41c4-a853-468cddea81ca (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:14,008 [244] [DEBUG] [app] Starting request: urn:request:7b0dfc5f-500c-415b-b718-9ffd761a4307 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:14,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:14,010 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:14,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:14,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:57:14,012 [246] [DEBUG] [app] Starting request: urn:request:7f2cbe5d-28be-4425-bc61-36c1a1bff8b9 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:14,013 [246] [DEBUG] [app] Ending request: urn:request:7f2cbe5d-28be-4425-bc61-36c1a1bff8b9 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:7f2cbe5d-28be-4425-bc61-36c1a1bff8b9', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:57:14,013 [253] [DEBUG] [app] Starting request: urn:request:7cf2ff60-7440-48e0-b897-f27d210f1493 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:14,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-registry stdout | 2025-02-14 01:57:14,014 [253] [DEBUG] [app] Ending request: urn:request:7cf2ff60-7440-48e0-b897-f27d210f1493 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:7cf2ff60-7440-48e0-b897-f27d210f1493', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:14,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.002) gunicorn-registry stdout | 2025-02-14 01:57:14,014 [253] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:14 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:14,014 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:14,014 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:14,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:14,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:14,017 [244] [DEBUG] [app] Starting request: urn:request:08017bc1-a1a2-41b8-b633-3aca657eefe1 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:14,018 [244] [DEBUG] [app] Ending request: urn:request:08017bc1-a1a2-41b8-b633-3aca657eefe1 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:08017bc1-a1a2-41b8-b633-3aca657eefe1', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:57:14,018 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:14,018 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:14,018 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:14,018 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:14,018 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:14,019 [243] [DEBUG] [app] Starting request: urn:request:24f58475-148c-4e6d-83cb-d660ce517f0d (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:14,020 [243] [DEBUG] [app] Ending request: urn:request:24f58475-148c-4e6d-83cb-d660ce517f0d (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:24f58475-148c-4e6d-83cb-d660ce517f0d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:14,020 [243] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:14 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:14,020 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:57:14,021 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:14,021 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:14,024 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
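The gunicorn-web workers proxy their health sub-checks through nginx at https://localhost:8443 without verifying the certificate, which is what raises the urllib3 InsecureRequestWarning seen above. A small sketch of both variants with requests follows, showing what the warning's advice amounts to; the CA bundle path is a placeholder.

    import requests

    # As in the log: unverified HTTPS to localhost, which triggers InsecureRequestWarning.
    resp = requests.get("https://localhost:8443/_internal_ping", verify=False)

    # What the warning recommends: verify against the CA that signed the local certificate.
    # The bundle path below is hypothetical.
    resp = requests.get(
        "https://localhost:8443/_internal_ping",
        verify="/path/to/ca.crt",
    )
    assert resp.status_code == 200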
gunicorn-web stdout | 2025-02-14 01:57:14,024 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:14,026 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:57:14,026 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:14,031 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:14,033 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:14,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:14,035 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:14,036 [242] [DEBUG] [app] Ending request: urn:request:e71f4122-8a50-41c4-a853-468cddea81ca (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:e71f4122-8a50-41c4-a853-468cddea81ca', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:14,036 [242] [DEBUG] [data.database] Disconnecting from database. nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:57:14,036 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:57:14,038 [244] [DEBUG] [app] Ending request: urn:request:7b0dfc5f-500c-415b-b718-9ffd761a4307 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:7b0dfc5f-500c-415b-b718-9ffd761a4307', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:14,038 [244] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:14 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.031) gunicorn-web stdout | 2025-02-14 01:57:14,038 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:14 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" exportactionlogsworker stdout | 2025-02-14 01:57:14,971 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:57:15,083 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} notificationworker stdout | 2025-02-14 01:57:15,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:15,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:25.803718+00:00 (in 9.999563 seconds) notificationworker stdout | 2025-02-14 01:57:15,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:25 UTC)" (scheduled at 2025-02-14 01:57:15.803718+00:00) notificationworker stdout | 2025-02-14 01:57:15,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:57:15,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 15, 804358), True, datetime.datetime(2025, 2, 14, 1, 57, 15, 804358), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:15,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:15,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:57:15,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:25 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:57:17,157 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} namespacegcworker stdout | 2025-02-14 01:57:20,516 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:57:20,910 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} nginx stdout | 10.128.4.34 - - [14/Feb/2025:01:57:21 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" autopruneworker stdout | 2025-02-14 01:57:22,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:57:22,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:52.310342+00:00 (in 29.999529 seconds) autopruneworker stdout | 2025-02-14 01:57:22,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:52 UTC)" (scheduled at 2025-02-14 01:57:22.310342+00:00) autopruneworker stdout | 2025-02-14 01:57:22,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494642316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:57:22,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:57:22,321 [56] [DEBUG] [data.database] Disconnecting from database. 
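The autopruneworker claims its next task with ORDER BY "last_ran_ms" ASC NULLS first ... FOR UPDATE SKIP LOCKED, so several Quay pods can poll the same table without contending: each instance locks at most one row and silently skips rows already locked by a peer. The sketch below shows that claim pattern in simplified form (the logged query's NOT IN subquery against "user" is dropped, and the psycopg2 wrapper is an illustrative assumption).

    import psycopg2

    CLAIM_SQL = """
    SELECT "id", "namespace_id", "last_ran_ms"
    FROM "autoprunetaskstatus"
    WHERE ("last_ran_ms" < %s) OR ("last_ran_ms" IS NULL)
    ORDER BY "last_ran_ms" ASC NULLS FIRST
    LIMIT 1
    FOR UPDATE SKIP LOCKED
    """

    def claim_next_task(conn, cutoff_ms):
        """Hedged sketch: lock one stale task row, or return None if none are claimable."""
        with conn, conn.cursor() as cur:  # transaction scope keeps the row lock until commit
            cur.execute(CLAIM_SQL, (cutoff_ms,))
            return cur.fetchone()  # the log's "no autoprune tasks found, exiting..." maps to None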
autopruneworker stdout | 2025-02-14 01:57:22,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:52 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:57:22,647 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} buildlogsarchiver stdout | 2025-02-14 01:57:23,553 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:57:24,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:57:24,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:29.232325+00:00 (in 5.000656 seconds) securityworker stdout | 2025-02-14 01:57:24,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:54 UTC)" (scheduled at 2025-02-14 01:57:24.231161+00:00) securityworker stdout | 2025-02-14 01:57:24,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:57:24,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:57:24,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:57:24,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:57:24,245 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:57:24,247 [88] [DEBUG] [data.database] Disconnecting from database. 
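Every 30 seconds the securityworker generates a JWT and GETs Clair's /indexer/api/v1/index_state before deciding what to index, as logged above. A hedged client-side sketch with requests follows; the bearer-token handling, timeout, and JSON response handling are assumptions, since the log only shows the 200 response.

    import requests

    CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

    def get_index_state(jwt_token):
        """Hedged sketch of the index_state call seen in the log."""
        resp = requests.get(
            f"{CLAIR}/indexer/api/v1/index_state",
            headers={"Authorization": f"Bearer {jwt_token}"},  # JWT per the log; claims not shown
            timeout=5,  # assumed
        )
        resp.raise_for_status()
        return resp.json()  # assumed JSON body describing the indexer's current state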
securityworker stdout | 2025-02-14 01:57:24,247 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:54 UTC)" executed successfully gcworker stdout | 2025-02-14 01:57:24,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:57:24,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:25.392556+00:00 (in 1.001694 seconds) gcworker stdout | 2025-02-14 01:57:24,391 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:54 UTC)" (scheduled at 2025-02-14 01:57:24.390410+00:00) gcworker stdout | 2025-02-14 01:57:24,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:57:24,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:54 UTC)" executed successfully exportactionlogsworker stdout | 2025-02-14 01:57:25,215 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:57:25,215 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:30.212654+00:00 (in 4.996946 seconds) exportactionlogsworker stdout | 2025-02-14 01:57:25,215 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:25 UTC)" (scheduled at 2025-02-14 01:57:25.215238+00:00) exportactionlogsworker stdout | 2025-02-14 01:57:25,215 [63] [DEBUG] [workers.queueworker] Running watchdog. exportactionlogsworker stdout | 2025-02-14 01:57:25,216 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:25 UTC)" executed successfully gcworker stdout | 2025-02-14 01:57:25,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:57:25,393 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:54.390410+00:00 (in 28.997417 seconds) gcworker stdout | 2025-02-14 01:57:25,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:55 UTC)" (scheduled at 2025-02-14 01:57:25.392556+00:00) gcworker stdout | 2025-02-14 01:57:25,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) storagereplication stdout | 2025-02-14 01:57:25,398 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} gcworker stdout | 2025-02-14 01:57:25,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497945401, None, 1, 0]) gcworker stdout | 2025-02-14 01:57:25,404 [64] [DEBUG] [data.database] Disconnecting from database. 
gcworker stdout | 2025-02-14 01:57:25,404 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:55 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:57:25,789 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} notificationworker stdout | 2025-02-14 01:57:25,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:25,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:35.803718+00:00 (in 9.999590 seconds) notificationworker stdout | 2025-02-14 01:57:25,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:35 UTC)" (scheduled at 2025-02-14 01:57:25.803718+00:00) notificationworker stdout | 2025-02-14 01:57:25,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. notificationworker stdout | 2025-02-14 01:57:25,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 25, 804336), True, datetime.datetime(2025, 2, 14, 1, 57, 25, 804336), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:25,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:25,814 [75] [DEBUG] [data.database] Disconnecting from database. 
notificationworker stdout | 2025-02-14 01:57:25,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:35 UTC)" executed successfully manifestbackfillworker stdout | 2025-02-14 01:57:26,591 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} nginx stdout | 10.129.2.30 - - [14/Feb/2025:01:57:26 +0000] "GET / HTTP/1.1" 301 169 "-" "python-requests/2.32.2" globalpromstats stdout | 2025-02-14 01:57:27,002 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} builder stdout | 2025-02-14 01:57:27,362 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} servicekey stdout | 2025-02-14 01:57:27,692 [89] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'servicekeyworker.py', 'pid': '89'} logrotateworker stdout | 2025-02-14 01:57:28,003 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} securityworker stdout | 2025-02-14 01:57:28,182 [88] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityworker.py', 'pid': '88'} blobuploadcleanupworker stdout | 2025-02-14 01:57:28,456 [57] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'} autopruneworker stdout | 2025-02-14 01:57:28,644 [56] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'} repositoryactioncounter stdout | 2025-02-14 01:57:28,851 [81] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositoryactioncounter.py', 'pid': '81'} gunicorn-web stdout | 2025-02-14 01:57:29,007 [242] [DEBUG] [app] Starting request: urn:request:53bf0a38-9642-4fbc-bc9c-0dc9657f88f5 (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:29,008 [244] [DEBUG] [app] Starting request: urn:request:b6b2c370-95dc-466a-a7f4-d5e7e15a0b1b (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:29,008 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:29,009 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:29,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made 
to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:29,012 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:57:29,012 [252] [DEBUG] [app] Starting request: urn:request:be1ffdae-c2cc-4266-bb86-eb84b393f835 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:29,012 [252] [DEBUG] [app] Ending request: urn:request:be1ffdae-c2cc-4266-bb86-eb84b393f835 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:be1ffdae-c2cc-4266-bb86-eb84b393f835', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:57:29,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.001) gunicorn-registry stdout | 2025-02-14 01:57:29,013 [251] [DEBUG] [app] Starting request: urn:request:c895d6ab-d3de-4535-9916-9b6c1302ae53 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:29,013 [251] [DEBUG] [app] Ending request: urn:request:c895d6ab-d3de-4535-9916-9b6c1302ae53 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:c895d6ab-d3de-4535-9916-9b6c1302ae53', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:29,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-registry stdout | 2025-02-14 01:57:29,013 [251] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:57:29,013 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:29,015 [244] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:29,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:29,016 [244] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:29,017 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:29,018 [245] [DEBUG] [app] Starting request: urn:request:9a9dc39b-8159-4243-98fd-6822a3487cc4 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:29,018 [244] [DEBUG] [app] Starting request: urn:request:81fd8047-c1bb-454b-8c8a-76faf001fff9 (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:29,018 [245] [DEBUG] [app] Ending request: urn:request:9a9dc39b-8159-4243-98fd-6822a3487cc4 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:9a9dc39b-8159-4243-98fd-6822a3487cc4', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:29,018 [244] [DEBUG] [app] Ending request: urn:request:81fd8047-c1bb-454b-8c8a-76faf001fff9 (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:81fd8047-c1bb-454b-8c8a-76faf001fff9', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:29,018 [245] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.001) gunicorn-web stdout | 2025-02-14 01:57:29,019 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:29 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:29,019 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:29,019 [244] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:29,019 [244] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:29,019 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:29,019 [244] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:29,019 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:29,025 [244] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
gunicorn-web stdout | 2025-02-14 01:57:29,025 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:29,025 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:57:29,025 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:29,032 [244] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:29,032 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:29,034 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:29,034 [244] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:29,037 [244] [DEBUG] [app] Ending request: urn:request:b6b2c370-95dc-466a-a7f4-d5e7e15a0b1b (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:b6b2c370-95dc-466a-a7f4-d5e7e15a0b1b', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:29,037 [242] [DEBUG] [app] Ending request: urn:request:53bf0a38-9642-4fbc-bc9c-0dc9657f88f5 (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:53bf0a38-9642-4fbc-bc9c-0dc9657f88f5', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:29,037 [244] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:57:29,037 [242] [DEBUG] [data.database] Disconnecting from database. 
nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:29 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.031 47 0.032) gunicorn-web stdout | 2025-02-14 01:57:29,037 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" gunicorn-web stdout | 2025-02-14 01:57:29,037 [244] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:29 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" securityworker stdout | 2025-02-14 01:57:29,232 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:57:29,232 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:54.231161+00:00 (in 24.998373 seconds) securityworker stdout | 2025-02-14 01:57:29,232 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:59 UTC)" (scheduled at 2025-02-14 01:57:29.232325+00:00) securityworker stdout | 2025-02-14 01:57:29,233 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:57:29,233 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:57:29,235 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:57:29,236 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:57:29,244 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stdout | 2025-02-14 01:57:29,244 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:57:29,244 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:57:29,244 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:57:29,245 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" LEFT OUTER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE ((("t2"."id" IS %s) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [None, 1, 2]) securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No 
candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:57:29,248 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:57:29,249 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((("t2"."index_status" = %s) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-1, datetime.datetime(2025, 2, 14, 1, 52, 29, 236440), 1, 2]) securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker 
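The util.migrate.allocator lines above trace how the securityworker walks the manifest id space: it takes the total id range, picks a random uncovered "hole", queries that block for candidates, and marks the whole block completed when nothing comes back, until no range is left ("No more work by worker"). The sketch below is a hypothetical illustration of that scanning idea, not Quay's allocator; block_size and fetch_block are made-up names:

import random

def scan_in_random_blocks(min_id, max_id, fetch_block, block_size=1000):
    # Half-open id ranges that have not been covered yet ("holes").
    holes = [(min_id, max_id + 1)]
    while holes:
        lo, hi = holes.pop(random.randrange(len(holes)))  # "Selected random hole"
        start = random.randint(lo, hi - 1)                # "Rand max bound"
        end = min(start + block_size, hi)
        yield from fetch_block(start, end)                # e.g. the manifest JOIN query above
        # "Marking the range completed": keep only the parts of the hole on
        # either side of the block that was just covered.
        if lo < start:
            holes.append((lo, start))
        if end < hi:
            holes.append((end, hi))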
securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:57:29,251 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:57:29,252 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:57:29,252 [88] [DEBUG] [util.migrate.allocator] Selected random hole 0 with 1 total holes securityworker stdout | 2025-02-14 01:57:29,252 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stdout | 2025-02-14 01:57:29,252 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Selecting from hole range: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Rand max bound: 1 securityworker stdout | 2025-02-14 01:57:29,252 [88] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled", "t2"."id", "t2"."manifest_id", "t2"."repository_id", "t2"."index_status", "t2"."error_json", "t2"."last_indexed", "t2"."indexer_hash", "t2"."indexer_version", "t2"."metadata_json" FROM "manifest" AS "t1" INNER JOIN "manifestsecuritystatus" AS "t2" ON ("t2"."manifest_id" = "t1"."id") WHERE (((((("t2"."index_status" != %s) AND ("t2"."index_status" != %s)) AND ("t2"."indexer_hash" != %s)) AND ("t2"."last_indexed" < %s)) AND ("t1"."id" >= %s)) AND ("t1"."id" < %s)) ORDER BY "t1"."id"', [-2, -3, '37b46b4a70b6f1a19d5e4e18d21f57ff', datetime.datetime(2025, 2, 14, 1, 52, 29, 236440), 1, 2]) securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] No candidates, marking entire block completed 1-2 by worker securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] Marking the range 
completed: 1-2 securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:57:29,255 [88] [DEBUG] [data.database] Disconnecting from database. securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Marking the range completed: 1-2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new max to: 1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Discarding block and setting new min to: 2 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total blocks: 0 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] Total range: 2-1 securityworker stderr | 2025-02-14 01:57:29 [88] [DEBUG] [util.migrate.allocator] No more work by worker securityworker stdout | 2025-02-14 01:57:29,255 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_recent_manifests_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:57:59 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:57:29,314 [85] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'repositorygcworker.py', 'pid': '85'} gcworker stdout | 2025-02-14 01:57:30,031 [64] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'} exportactionlogsworker stdout | 2025-02-14 01:57:30,212 [63] [DEBUG] [apscheduler.scheduler] Looking for jobs to run exportactionlogsworker stdout | 2025-02-14 01:57:30,213 [63] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:25.215238+00:00 (in 55.002156 seconds) exportactionlogsworker stdout | 2025-02-14 01:57:30,213 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:30 UTC)" (scheduled at 2025-02-14 01:57:30.212654+00:00) exportactionlogsworker stdout | 2025-02-14 01:57:30,213 [63] [DEBUG] [workers.queueworker] Getting work item from queue. 
exportactionlogsworker stdout | 2025-02-14 01:57:30,214 [63] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 30, 213366), True, datetime.datetime(2025, 2, 14, 1, 57, 30, 213366), 0, 'exportactionlogs/%', 50, 1, 0]) exportactionlogsworker stdout | 2025-02-14 01:57:30,223 [63] [DEBUG] [workers.queueworker] No more work. exportactionlogsworker stdout | 2025-02-14 01:57:30,223 [63] [DEBUG] [data.database] Disconnecting from database. exportactionlogsworker stdout | 2025-02-14 01:57:30,223 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:30 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:57:31,380 [243] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'} gunicorn-web stdout | 2025-02-14 01:57:31,383 [244] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '244'} gunicorn-web stdout | 2025-02-14 01:57:31,386 [242] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'} gunicorn-web stdout | 2025-02-14 01:57:31,389 [68] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'} gunicorn-web stdout | 2025-02-14 01:57:31,392 [245] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '245'} chunkcleanupworker stdout | 2025-02-14 01:57:31,770 [60] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'} manifestsubjectbackfillworker stdout | 2025-02-14 01:57:32,610 [71] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestsubjectbackfillworker.py', 'pid': '71'} securityscanningnotificationworker stdout | 2025-02-14 01:57:32,957 [87] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'securityscanningnotificationworker.py', 'pid': '87'} repositorygcworker stdout | 2025-02-14 01:57:33,011 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:57:33,012 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 
2025-02-14 01:57:36.014770+00:00 (in 3.002664 seconds) repositorygcworker stdout | 2025-02-14 01:57:33,012 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:33 UTC)" (scheduled at 2025-02-14 01:57:33.011632+00:00) repositorygcworker stdout | 2025-02-14 01:57:33,012 [85] [DEBUG] [workers.queueworker] Getting work item from queue. repositorygcworker stdout | 2025-02-14 01:57:33,013 [85] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 33, 12400), True, datetime.datetime(2025, 2, 14, 1, 57, 33, 12400), 0, 'repositorygc/%', 50, 1, 0]) repositorygcworker stdout | 2025-02-14 01:57:33,022 [85] [DEBUG] [workers.queueworker] No more work. repositorygcworker stdout | 2025-02-14 01:57:33,022 [85] [DEBUG] [data.database] Disconnecting from database. repositorygcworker stdout | 2025-02-14 01:57:33,022 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:33 UTC)" executed successfully gunicorn-secscan stdout | 2025-02-14 01:57:33,293 [67] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'} gunicorn-secscan stdout | 2025-02-14 01:57:33,296 [238] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'} gunicorn-secscan stdout | 2025-02-14 01:57:33,299 [237] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '237'} buildlogsarchiver stdout | 2025-02-14 01:57:34,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run buildlogsarchiver stdout | 2025-02-14 01:57:34,000 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:04.000511+00:00 (in 29.999541 seconds) buildlogsarchiver stdout | 2025-02-14 01:57:34,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:04 UTC)" (scheduled at 2025-02-14 01:57:34.000511+00:00) buildlogsarchiver stdout | 2025-02-14 01:57:34,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 57, 34, 1229), False, 50, 1, 0]) buildlogsarchiver stdout | 2025-02-14 01:57:34,011 [59] [DEBUG] [__main__] No more builds to archive buildlogsarchiver stdout | 2025-02-14 01:57:34,011 [59] [DEBUG] [data.database] Disconnecting from database. 
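Each queue worker above (exportactionlogs, repositorygc, notification, namespacegc) polls the shared queueitem table with the same shape of query: items whose queue_name matches the worker's prefix, that are either available or whose processing lease has expired, and that still have retries left, with one picked at random. A simplified sketch of that poll (the logged query additionally wraps this in a LIMIT-50 subquery); db is a peewee database handle and prefix is the worker's own pattern, e.g. 'repositorygc/%':

import datetime

POLL_SQL = """
SELECT "t1"."id", "t1"."queue_name", "t1"."body"
FROM "queueitem" AS "t1"
WHERE "t1"."available_after" <= %s
  AND ("t1"."available" = %s OR "t1"."processing_expires" <= %s)
  AND "t1"."retries_remaining" > %s
  AND "t1"."queue_name" ILIKE %s
ORDER BY Random()
LIMIT 1
"""

def poll_queue(db, prefix):
    # Returns one available work item for this worker's queue prefix, or None.
    now = datetime.datetime.utcnow()
    cursor = db.execute_sql(POLL_SQL, (now, True, now, 0, prefix))
    return cursor.fetchone()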
buildlogsarchiver stdout | 2025-02-14 01:57:34,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:04 UTC)" executed successfully gunicorn-registry stdout | 2025-02-14 01:57:34,603 [247] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '247'} gunicorn-registry stdout | 2025-02-14 01:57:34,605 [248] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '248'} gunicorn-registry stdout | 2025-02-14 01:57:34,610 [250] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '250'} gunicorn-registry stdout | 2025-02-14 01:57:34,614 [246] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '246'} gunicorn-registry stdout | 2025-02-14 01:57:34,617 [252] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '252'} gunicorn-registry stdout | 2025-02-14 01:57:34,620 [253] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '253'} gunicorn-registry stdout | 2025-02-14 01:57:34,623 [66] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'} gunicorn-registry stdout | 2025-02-14 01:57:34,654 [249] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '249'} gunicorn-registry stdout | 2025-02-14 01:57:34,658 [251] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '251'} notificationworker stdout | 2025-02-14 01:57:35,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:35,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:45.803718+00:00 (in 9.999533 seconds) notificationworker stdout | 2025-02-14 01:57:35,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:45 UTC)" (scheduled at 2025-02-14 01:57:35.803718+00:00) notificationworker stdout | 2025-02-14 01:57:35,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:57:35,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 35, 804393), True, datetime.datetime(2025, 2, 14, 1, 57, 35, 804393), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:35,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:35,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:57:35,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:45 UTC)" executed successfully repositorygcworker stdout | 2025-02-14 01:57:36,014 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:57:36,015 [85] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:33.011632+00:00 (in 56.996433 seconds) repositorygcworker stdout | 2025-02-14 01:57:36,015 [85] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:36 UTC)" (scheduled at 2025-02-14 01:57:36.014770+00:00) repositorygcworker stdout | 2025-02-14 01:57:36,015 [85] [DEBUG] [workers.queueworker] Running watchdog. repositorygcworker stdout | 2025-02-14 01:57:36,015 [85] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:36 UTC)" executed successfully gunicorn-web stdout | 2025-02-14 01:57:44,007 [242] [DEBUG] [app] Starting request: urn:request:0fe36ee2-d71b-412e-8c22-eb7204a65d9d (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:44,007 [245] [DEBUG] [app] Starting request: urn:request:32bef14c-019d-4531-a6e5-37eee69ab4ef (/health/instance) {'X-Forwarded-For': '10.129.2.2'} gunicorn-web stdout | 2025-02-14 01:57:44,009 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:44,009 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:44,011 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:44,011 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-registry stdout | 2025-02-14 01:57:44,012 [246] [DEBUG] [app] Starting request: urn:request:2e6211f0-cc36-4dc4-a687-f4a029204d11 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:44,012 [252] [DEBUG] [app] Starting request: urn:request:a861417d-be06-443b-a8aa-ecc256c50e67 (/v1/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-registry stdout | 2025-02-14 01:57:44,013 [246] [DEBUG] [app] Ending request: urn:request:2e6211f0-cc36-4dc4-a687-f4a029204d11 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:2e6211f0-cc36-4dc4-a687-f4a029204d11', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:57:44,013 [252] [DEBUG] [app] Ending request: urn:request:a861417d-be06-443b-a8aa-ecc256c50e67 (/v1/_internal_ping) {'endpoint': 'v1.internal_ping', 'request_id': 'urn:request:a861417d-be06-443b-a8aa-ecc256c50e67', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/v1/_internal_ping', 'path': '/v1/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '5cffa2c6', 'user-agent': 'python-requests/2.32.2'} gunicorn-registry stdout | 2025-02-14 01:57:44,013 [246] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 162 0.001) gunicorn-registry stdout | 2025-02-14 01:57:44,013 [252] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:44,013 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:44,013 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /v1/_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 162 0.002) gunicorn-web stdout | 2025-02-14 01:57:44,014 [245] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:44,015 [242] [DEBUG] [urllib3.connectionpool] Resetting dropped connection: localhost gunicorn-web stdout | 2025-02-14 01:57:44,016 [245] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:44,017 [242] [DEBUG] [app] Starting request: urn:request:2dde7a13-0651-42ca-9c14-cc2acecd099b (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:44,017 [242] [DEBUG] [app] Ending request: urn:request:2dde7a13-0651-42ca-9c14-cc2acecd099b (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:2dde7a13-0651-42ca-9c14-cc2acecd099b', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:44,018 [242] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" gunicorn-web stdout | 2025-02-14 01:57:44,018 [245] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.002 159 0.001) gunicorn-web stdout | 2025-02-14 01:57:44,018 [245] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:44,018 [245] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:44,019 [242] [WARNING] [py.warnings] /app/lib/python3.9/site-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings gunicorn-web stdout | warnings.warn( gunicorn-web stdout | 2025-02-14 01:57:44,020 [244] [DEBUG] [app] Starting request: urn:request:3e0a91e2-cc1c-4de7-bfc1-ef737704e77d (/_internal_ping) {'X-Forwarded-For': '127.0.0.1'} gunicorn-web stdout | 2025-02-14 01:57:44,020 [244] [DEBUG] [app] Ending request: urn:request:3e0a91e2-cc1c-4de7-bfc1-ef737704e77d (/_internal_ping) {'endpoint': 'web.internal_ping', 'request_id': 'urn:request:3e0a91e2-cc1c-4de7-bfc1-ef737704e77d', 'remote_addr': '127.0.0.1', 'http_method': 'GET', 'original_url': 'https://localhost/_internal_ping', 'path': '/_internal_ping', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'python-requests/2.32.2'} gunicorn-web stdout | 2025-02-14 01:57:44,020 [244] [INFO] [gunicorn.access] 127.0.0.1 - - [14/Feb/2025:01:57:44 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.32.2" nginx stdout | 127.0.0.1 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.32.2" (0.001 159 0.002) gunicorn-web stdout | 2025-02-14 01:57:44,020 [242] [DEBUG] [urllib3.connectionpool] https://localhost:8443 "GET /_internal_ping HTTP/1.1" 200 4 gunicorn-web stdout | 2025-02-14 01:57:44,021 [242] [DEBUG] [data.model.health] Validating database connection. gunicorn-web stdout | 2025-02-14 01:57:44,021 [242] [INFO] [data.database] Connection pooling disabled for postgresql gunicorn-web stdout | 2025-02-14 01:57:44,024 [245] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. 
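The InsecureRequestWarning entries above are raised because the health check calls the pod's own nginx endpoints over HTTPS on localhost without verifying the certificate. A minimal reproduction of that call shape with requests, using the localhost:8443 address shown in the connectionpool lines; pointing verify at a CA bundle instead of False would silence the warning (the bundle path below is a placeholder):

import requests

# verify=False reproduces the "Unverified HTTPS request" warning logged above.
resp = requests.get("https://localhost:8443/v1/_internal_ping", verify=False, timeout=5)
print(resp.status_code, resp.text)

# Verified variant (placeholder CA path, not taken from this deployment):
# requests.get("https://localhost:8443/v1/_internal_ping", verify="/path/to/ca.pem", timeout=5)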
gunicorn-web stdout | 2025-02-14 01:57:44,024 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:44,026 [242] [DEBUG] [data.model.health] Checking for existence of team roles, timeout 5000 ms. gunicorn-web stdout | 2025-02-14 01:57:44,026 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (5000,)) gunicorn-web stdout | 2025-02-14 01:57:44,031 [245] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:44,033 [242] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "teamrole" AS "t1" LIMIT %s', [1]) gunicorn-web stdout | 2025-02-14 01:57:44,034 [245] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:44,035 [242] [DEBUG] [peewee] ('SET statement_timeout=%s;', (0,)) gunicorn-web stdout | 2025-02-14 01:57:44,036 [245] [DEBUG] [app] Ending request: urn:request:32bef14c-019d-4531-a6e5-37eee69ab4ef (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:32bef14c-019d-4531-a6e5-37eee69ab4ef', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:44,036 [245] [DEBUG] [data.database] Disconnecting from database. gunicorn-web stdout | 2025-02-14 01:57:44,036 [245] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.030 47 0.030) gunicorn-web stdout | 2025-02-14 01:57:44,038 [242] [DEBUG] [app] Ending request: urn:request:0fe36ee2-d71b-412e-8c22-eb7204a65d9d (/health/instance) {'endpoint': 'web.instance_health', 'request_id': 'urn:request:0fe36ee2-d71b-412e-8c22-eb7204a65d9d', 'remote_addr': '10.129.2.2', 'http_method': 'GET', 'original_url': 'https://10.129.2.28/health/instance', 'path': '/health/instance', 'parameters': {}, 'json_body': None, 'confsha': '3dba1530', 'user-agent': 'kube-probe/1.30'} gunicorn-web stdout | 2025-02-14 01:57:44,038 [242] [DEBUG] [data.database] Disconnecting from database. 
gunicorn-web stdout | 2025-02-14 01:57:44,038 [242] [INFO] [gunicorn.access] 10.129.2.2 - - [14/Feb/2025:01:57:44 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.30" nginx stdout | 10.129.2.2 (-) - - [14/Feb/2025:01:57:44 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.30" (0.032 47 0.032) exportactionlogsworker stdout | 2025-02-14 01:57:45,007 [63] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'exportactionlogsworker.py', 'pid': '63'} quotaregistrysizeworker stdout | 2025-02-14 01:57:45,097 [78] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'quotaregistrysizeworker.py', 'pid': '78'} namespacegcworker stdout | 2025-02-14 01:57:45,503 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run namespacegcworker stdout | 2025-02-14 01:57:45,504 [73] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:12.505687+00:00 (in 27.001528 seconds) namespacegcworker stdout | 2025-02-14 01:57:45,504 [73] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:45 UTC)" (scheduled at 2025-02-14 01:57:45.503718+00:00) namespacegcworker stdout | 2025-02-14 01:57:45,504 [73] [DEBUG] [workers.queueworker] Getting work item from queue. namespacegcworker stdout | 2025-02-14 01:57:45,505 [73] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 45, 504503), True, datetime.datetime(2025, 2, 14, 1, 57, 45, 504503), 0, 'namespacegc/%', 50, 1, 0]) namespacegcworker stdout | 2025-02-14 01:57:45,515 [73] [DEBUG] [workers.queueworker] No more work. namespacegcworker stdout | 2025-02-14 01:57:45,515 [73] [DEBUG] [data.database] Disconnecting from database. namespacegcworker stdout | 2025-02-14 01:57:45,515 [73] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:45 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:57:45,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:45,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:55.803718+00:00 (in 9.999556 seconds) notificationworker stdout | 2025-02-14 01:57:45,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:55 UTC)" (scheduled at 2025-02-14 01:57:45.803718+00:00) notificationworker stdout | 2025-02-14 01:57:45,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:57:45,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 45, 804368), True, datetime.datetime(2025, 2, 14, 1, 57, 45, 804368), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:45,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:45,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:57:45,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:57:55 UTC)" executed successfully quotaregistrysizeworker stdout | 2025-02-14 01:57:46,009 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:57:46,010 [78] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:46.009738+00:00 (in 59.999516 seconds) quotaregistrysizeworker stdout | 2025-02-14 01:57:46,010 [78] [INFO] [apscheduler.executors.default] Running job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:46 UTC)" (scheduled at 2025-02-14 01:57:46.009738+00:00) quotaregistrysizeworker stdout | 2025-02-14 01:57:46,010 [78] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."size_bytes", "t1"."running", "t1"."queued", "t1"."completed_ms" FROM "quotaregistrysize" AS "t1" LIMIT %s OFFSET %s', [1, 0]) quotaregistrysizeworker stdout | 2025-02-14 01:57:46,018 [78] [DEBUG] [data.database] Disconnecting from database. quotaregistrysizeworker stdout | 2025-02-14 01:57:46,019 [78] [INFO] [apscheduler.executors.default] Job "QuotaRegistrySizeWorker._calculate_registry_size (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:46 UTC)" executed successfully queuecleanupworker stdout | 2025-02-14 01:57:47,183 [77] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'queuecleanupworker.py', 'pid': '77'} securityscanningnotificationworker stdout | 2025-02-14 01:57:48,125 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:57:48,125 [87] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:59.123196+00:00 (in 10.997583 seconds) securityscanningnotificationworker stdout | 2025-02-14 01:57:48,125 [87] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:48 UTC)" (scheduled at 2025-02-14 01:57:48.125163+00:00) securityscanningnotificationworker stdout | 2025-02-14 01:57:48,125 [87] [DEBUG] [workers.queueworker] Running watchdog. 
securityscanningnotificationworker stdout | 2025-02-14 01:57:48,125 [87] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:48 UTC)" executed successfully namespacegcworker stdout | 2025-02-14 01:57:50,553 [73] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'namespacegcworker.py', 'pid': '73'} teamsyncworker stdout | 2025-02-14 01:57:50,921 [92] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'teamsyncworker.py', 'pid': '92'} autopruneworker stdout | 2025-02-14 01:57:52,310 [56] [DEBUG] [apscheduler.scheduler] Looking for jobs to run autopruneworker stdout | 2025-02-14 01:57:52,310 [56] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:22.310342+00:00 (in 29.999564 seconds) autopruneworker stdout | 2025-02-14 01:57:52,310 [56] [INFO] [apscheduler.executors.default] Running job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:22 UTC)" (scheduled at 2025-02-14 01:57:52.310342+00:00) autopruneworker stdout | 2025-02-14 01:57:52,317 [56] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms", "t1"."status" FROM "autoprunetaskstatus" AS "t1" WHERE (("t1"."namespace_id" NOT IN (SELECT "t2"."id" FROM "user" AS "t2" WHERE (("t2"."enabled" = %s) AND ("t2"."id" = "t1"."namespace_id")))) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [False, 1739494672316, None, 1, 0]) autopruneworker stdout | 2025-02-14 01:57:52,321 [56] [INFO] [__main__] no autoprune tasks found, exiting... autopruneworker stdout | 2025-02-14 01:57:52,321 [56] [DEBUG] [data.database] Disconnecting from database. 
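The autopruneworker query above ends with FOR UPDATE SKIP LOCKED, which lets multiple replicas poll autoprunetaskstatus concurrently: a row already locked by another worker is simply skipped rather than blocked on or claimed twice. A stripped-down sketch of that claim step, reusing the peewee-style execute_sql call from earlier; the column list and follow-up processing are illustrative:

CLAIM_SQL = """
SELECT "t1"."id", "t1"."namespace_id", "t1"."last_ran_ms"
FROM "autoprunetaskstatus" AS "t1"
ORDER BY "t1"."last_ran_ms" ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

def claim_next_task(db):
    with db.atomic():                       # the row lock lives for this transaction
        row = db.execute_sql(CLAIM_SQL).fetchone()
        if row is None:
            return None                     # "no autoprune tasks found, exiting..."
        # ... process the task and update last_ran_ms inside the same transaction ...
        return row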
autopruneworker stdout | 2025-02-14 01:57:52,321 [56] [INFO] [apscheduler.executors.default] Job "AutoPruneWorker.prune (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:22 UTC)" executed successfully expiredappspecifictokenworker stdout | 2025-02-14 01:57:52,671 [62] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'expiredappspecifictokenworker.py', 'pid': '62'} manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,900 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,901 [71] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:05.898886+00:00 (in 12.997820 seconds) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,901 [71] [INFO] [apscheduler.executors.default] Running job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:52 UTC)" (scheduled at 2025-02-14 01:57:52.900596+00:00) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,901 [71] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."repository_id", "t1"."digest", "t1"."media_type_id", "t1"."manifest_bytes", "t1"."config_media_type", "t1"."layers_compressed_size", "t1"."subject", "t1"."subject_backfilled", "t1"."artifact_type", "t1"."artifact_type_backfilled" FROM "manifest" AS "t1" WHERE (("t1"."artifact_type_backfilled" = %s) OR ("t1"."artifact_type_backfilled" IS %s)) LIMIT %s OFFSET %s', [False, None, 1, 0]) manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,910 [71] [DEBUG] [__main__] Manifest artifact_type backfill worker has completed; skipping manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,910 [71] [DEBUG] [data.database] Disconnecting from database. 
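The "Looking for jobs to run", "Next wakeup is due at" and "Running job ... (trigger: interval[0:00:30])" lines throughout this log come from APScheduler interval triggers that each worker sets up for its periodic jobs. A minimal sketch of that scheduling setup with a placeholder job function:

from apscheduler.schedulers.background import BackgroundScheduler

def poll_queue():
    # Placeholder for the worker's real job body.
    print("polling...")

scheduler = BackgroundScheduler()
# Matches the interval[0:00:30]-style triggers shown above.
scheduler.add_job(poll_queue, "interval", seconds=30)
scheduler.start()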
manifestsubjectbackfillworker stdout | 2025-02-14 01:57:52,910 [71] [INFO] [apscheduler.executors.default] Job "ManifestSubjectBackfillWorker._backfill_manifest_artifact_type (trigger: interval[0:01:00], next run at: 2025-02-14 01:58:52 UTC)" executed successfully buildlogsarchiver stdout | 2025-02-14 01:57:53,583 [59] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'buildlogsarchiver.py', 'pid': '59'} securityworker stdout | 2025-02-14 01:57:54,231 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:57:54,231 [88] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:59.232325+00:00 (in 5.000717 seconds) securityworker stdout | 2025-02-14 01:57:54,231 [88] [INFO] [apscheduler.executors.default] Running job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:24 UTC)" (scheduled at 2025-02-14 01:57:54.231161+00:00) securityworker stdout | 2025-02-14 01:57:54,232 [88] [DEBUG] [util.secscan.v4.api] generated jwt for security scanner request securityworker stdout | 2025-02-14 01:57:54,232 [88] [DEBUG] [util.secscan.v4.api] GETing security URL http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local/indexer/api/v1/index_state securityworker stdout | 2025-02-14 01:57:54,234 [88] [DEBUG] [urllib3.connectionpool] http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local:80 "GET /indexer/api/v1/index_state HTTP/1.1" 200 None securityworker stdout | 2025-02-14 01:57:54,235 [88] [DEBUG] [peewee] ('SELECT Max("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:57:54,244 [88] [DEBUG] [peewee] ('SELECT Min("t1"."id") FROM "manifest" AS "t1"', []) securityworker stdout | 2025-02-14 01:57:54,246 [88] [DEBUG] [data.database] Disconnecting from database. 
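Before indexing, the securityworker above generates a JWT and GETs Clair's /indexer/api/v1/index_state endpoint to learn the indexer's current state. A hedged sketch of that request using requests and PyJWT; the signing key, claims and algorithm below are placeholders for illustration, not the token Quay actually produces:

import datetime
import jwt        # PyJWT
import requests

CLAIR = "http://quayregistry-clair-app.quay-enterprise-15141.svc.cluster.local"

def get_index_state(shared_key: bytes):
    # Placeholder claims; the real service signs its own issuer/expiry.
    token = jwt.encode(
        {"iss": "quay", "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=5)},
        shared_key,
        algorithm="HS256",
    )
    resp = requests.get(
        f"{CLAIR}/indexer/api/v1/index_state",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()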
securityworker stdout | 2025-02-14 01:57:54,246 [88] [INFO] [apscheduler.executors.default] Job "SecurityWorker._index_in_scanner (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:57:54,390 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:57:54,390 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:57:55.392556+00:00 (in 1.001673 seconds) gcworker stdout | 2025-02-14 01:57:54,391 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:24 UTC)" (scheduled at 2025-02-14 01:57:54.390410+00:00) gcworker stdout | 2025-02-14 01:57:54,391 [64] [DEBUG] [__main__] No GC policies found gcworker stdout | 2025-02-14 01:57:54,391 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:24 UTC)" executed successfully gcworker stdout | 2025-02-14 01:57:55,392 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run gcworker stdout | 2025-02-14 01:57:55,392 [64] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:24.390410+00:00 (in 28.997422 seconds) gcworker stdout | 2025-02-14 01:57:55,393 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:25 UTC)" (scheduled at 2025-02-14 01:57:55.392556+00:00) gcworker stdout | 2025-02-14 01:57:55,393 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."name" FROM "externalnotificationevent" AS "t1" WHERE ("t1"."name" = %s) LIMIT %s OFFSET %s', ['repo_image_expiry', 1, 0]) gcworker stdout | 2025-02-14 01:57:55,402 [64] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."uuid", "t1"."method_id", "t1"."repository_id", "t1"."event_config_json" FROM "repositorynotification" AS "t1" WHERE ((("t1"."event_id" = %s) AND ("t1"."number_of_failures" < %s)) AND (("t1"."last_ran_ms" < %s) OR ("t1"."last_ran_ms" IS %s))) ORDER BY "t1"."last_ran_ms" ASC NULLS first LIMIT %s OFFSET %s FOR UPDATE SKIP LOCKED', [11, 3, 1739497975401, None, 1, 0]) gcworker stdout | 2025-02-14 01:57:55,405 [64] [DEBUG] [data.database] Disconnecting from database. gcworker stdout | 2025-02-14 01:57:55,405 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._scan_notifications (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:25 UTC)" executed successfully storagereplication stdout | 2025-02-14 01:57:55,405 [90] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'storagereplication.py', 'pid': '90'} notificationworker stdout | 2025-02-14 01:57:55,803 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:55,804 [75] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:05.803718+00:00 (in 9.999532 seconds) notificationworker stdout | 2025-02-14 01:57:55,804 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:58:05 UTC)" (scheduled at 2025-02-14 01:57:55.803718+00:00) notificationworker stdout | 2025-02-14 01:57:55,804 [75] [DEBUG] [workers.queueworker] Getting work item from queue. 
notificationworker stdout | 2025-02-14 01:57:55,805 [75] [DEBUG] [peewee] ('SELECT "t1"."id", "t1"."queue_name", "t1"."body", "t1"."available_after", "t1"."available", "t1"."processing_expires", "t1"."retries_remaining", "t1"."state_id" FROM "queueitem" AS "t1" INNER JOIN (SELECT "t1"."id" FROM "queueitem" AS "t1" WHERE (((("t1"."available_after" <= %s) AND (("t1"."available" = %s) OR ("t1"."processing_expires" <= %s))) AND ("t1"."retries_remaining" > %s)) AND ("t1"."queue_name" ILIKE %s)) LIMIT %s) AS "j1" ON ("t1"."id" = "j1"."id") ORDER BY Random() LIMIT %s OFFSET %s', [datetime.datetime(2025, 2, 14, 1, 57, 55, 804394), True, datetime.datetime(2025, 2, 14, 1, 57, 55, 804394), 0, 'notification/%', 50, 1, 0]) notificationworker stdout | 2025-02-14 01:57:55,814 [75] [DEBUG] [workers.queueworker] No more work. notificationworker stdout | 2025-02-14 01:57:55,814 [75] [DEBUG] [data.database] Disconnecting from database. notificationworker stdout | 2025-02-14 01:57:55,814 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2025-02-14 01:58:05 UTC)" executed successfully notificationworker stdout | 2025-02-14 01:57:55,825 [75] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'notificationworker.py', 'pid': '75'} manifestbackfillworker stdout | 2025-02-14 01:57:56,628 [70] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'manifestbackfillworker.py', 'pid': '70'} 2025-02-14 01:57:57,031 WARN received SIGTERM indicating exit request 2025-02-14 01:57:57,031 INFO waiting for stdout, autopruneworker, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats, gunicorn-registry, gunicorn-secscan, gunicorn-web, logrotateworker, manifestbackfillworker, manifestsubjectbackfillworker, memcache, namespacegcworker, nginx, notificationworker, pushgateway, queuecleanupworker, quotaregistrysizeworker, quotatotalworker, reconciliationworker, repositoryactioncounter, repositorygcworker, securityscanningnotificationworker, securityworker, servicekey, storagereplication, teamsyncworker to die globalpromstats stdout | 2025-02-14 01:57:57,031 [65] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'globalpromstats.py', 'pid': '65'} 2025-02-14 01:57:57,036 WARN stopped: teamsyncworker (terminated by SIGTERM) 2025-02-14 01:57:57,040 WARN stopped: storagereplication (terminated by SIGTERM) servicekey stdout | 2025-02-14 01:57:57,041 [89] [DEBUG] [workers.worker] Shutting down worker. servicekey stdout | 2025-02-14 01:57:57,041 [89] [DEBUG] [workers.worker] Waiting for running tasks to complete. servicekey stdout | 2025-02-14 01:57:57,041 [89] [INFO] [apscheduler.scheduler] Scheduler has been shut down servicekey stdout | 2025-02-14 01:57:57,041 [89] [DEBUG] [apscheduler.scheduler] Looking for jobs to run servicekey stdout | 2025-02-14 01:57:57,041 [89] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added servicekey stdout | 2025-02-14 01:57:57,041 [89] [DEBUG] [workers.worker] Finished. 
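From 01:57:57 the supervisor process receives SIGTERM and waits for every service to exit; each worker then shuts its scheduler down, waits for running tasks, and logs "Finished." before being reported as stopped. A sketch of that shutdown pattern for a single scheduler-driven worker; the structure is illustrative, not Quay's worker base class:

import signal
import threading
from apscheduler.schedulers.background import BackgroundScheduler

stop = threading.Event()
scheduler = BackgroundScheduler()

def handle_sigterm(signum, frame):
    stop.set()

signal.signal(signal.SIGTERM, handle_sigterm)
scheduler.start()

stop.wait()                       # block until SIGTERM arrives
scheduler.shutdown(wait=True)     # "Waiting for running tasks to complete." / "Scheduler has been shut down"
print("Finished.")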
2025-02-14 01:57:57,194 INFO stopped: servicekey (exit status 0) securityworker stdout | 2025-02-14 01:57:57,195 [88] [DEBUG] [workers.worker] Shutting down worker. securityworker stdout | 2025-02-14 01:57:57,195 [88] [DEBUG] [workers.worker] Waiting for running tasks to complete. securityworker stdout | 2025-02-14 01:57:57,195 [88] [INFO] [apscheduler.scheduler] Scheduler has been shut down securityworker stdout | 2025-02-14 01:57:57,195 [88] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityworker stdout | 2025-02-14 01:57:57,195 [88] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added securityworker stdout | 2025-02-14 01:57:57,195 [88] [DEBUG] [workers.worker] Finished. 2025-02-14 01:57:57,383 INFO stopped: securityworker (exit status 0) securityscanningnotificationworker stdout | 2025-02-14 01:57:57,383 [87] [DEBUG] [workers.worker] Shutting down worker. securityscanningnotificationworker stdout | 2025-02-14 01:57:57,383 [87] [DEBUG] [workers.worker] Waiting for running tasks to complete. securityscanningnotificationworker stdout | 2025-02-14 01:57:57,384 [87] [INFO] [apscheduler.scheduler] Scheduler has been shut down securityscanningnotificationworker stdout | 2025-02-14 01:57:57,384 [87] [DEBUG] [apscheduler.scheduler] Looking for jobs to run securityscanningnotificationworker stdout | 2025-02-14 01:57:57,384 [87] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added securityscanningnotificationworker stdout | 2025-02-14 01:57:57,384 [87] [DEBUG] [workers.worker] Finished. builder stdout | 2025-02-14 01:57:57,386 [58] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'builder.py', 'pid': '58'} 2025-02-14 01:57:57,532 INFO stopped: securityscanningnotificationworker (exit status 0) repositorygcworker stdout | 2025-02-14 01:57:57,532 [85] [DEBUG] [workers.worker] Shutting down worker. repositorygcworker stdout | 2025-02-14 01:57:57,532 [85] [DEBUG] [workers.worker] Waiting for running tasks to complete. repositorygcworker stdout | 2025-02-14 01:57:57,532 [85] [INFO] [apscheduler.scheduler] Scheduler has been shut down repositorygcworker stdout | 2025-02-14 01:57:57,533 [85] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositorygcworker stdout | 2025-02-14 01:57:57,533 [85] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added repositorygcworker stdout | 2025-02-14 01:57:57,533 [85] [DEBUG] [workers.worker] Finished. 2025-02-14 01:57:57,686 INFO stopped: repositorygcworker (exit status 0) repositoryactioncounter stdout | 2025-02-14 01:57:57,686 [81] [DEBUG] [workers.worker] Shutting down worker. repositoryactioncounter stdout | 2025-02-14 01:57:57,687 [81] [DEBUG] [workers.worker] Waiting for running tasks to complete. repositoryactioncounter stdout | 2025-02-14 01:57:57,687 [81] [INFO] [apscheduler.scheduler] Scheduler has been shut down repositoryactioncounter stdout | 2025-02-14 01:57:57,687 [81] [DEBUG] [apscheduler.scheduler] Looking for jobs to run repositoryactioncounter stdout | 2025-02-14 01:57:57,687 [81] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added repositoryactioncounter stdout | 2025-02-14 01:57:57,687 [81] [DEBUG] [workers.worker] Finished. 
2025-02-14 01:57:57,834 INFO stopped: repositoryactioncounter (exit status 0) logrotateworker stdout | 2025-02-14 01:57:58,012 [69] [DEBUG] [util.metrics.prometheus] pushed registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'logrotateworker.py', 'pid': '69'} 2025-02-14 01:57:58,017 WARN stopped: reconciliationworker (terminated by SIGTERM) 2025-02-14 01:57:58,021 WARN stopped: quotatotalworker (terminated by SIGTERM) quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [DEBUG] [workers.worker] Shutting down worker. quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [DEBUG] [workers.worker] Waiting for running tasks to complete. quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [INFO] [apscheduler.scheduler] Scheduler has been shut down quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [DEBUG] [apscheduler.scheduler] Looking for jobs to run quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added quotaregistrysizeworker stdout | 2025-02-14 01:57:58,022 [78] [DEBUG] [workers.worker] Finished. 2025-02-14 01:57:58,188 INFO stopped: quotaregistrysizeworker (exit status 0) queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [DEBUG] [workers.worker] Shutting down worker. queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [DEBUG] [workers.worker] Waiting for running tasks to complete. queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [INFO] [apscheduler.scheduler] Scheduler has been shut down queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [DEBUG] [apscheduler.scheduler] Looking for jobs to run queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added queuecleanupworker stdout | 2025-02-14 01:57:58,189 [77] [DEBUG] [workers.worker] Finished. 2025-02-14 01:57:58,354 INFO stopped: queuecleanupworker (exit status 0) pushgateway stderr | ts=2025-02-14T01:57:58.354Z caller=main.go:272 level=info msg="received SIGINT/SIGTERM; exiting gracefully..." pushgateway stderr | ts=2025-02-14T01:57:58.354Z caller=main.go:198 level=info msg="HTTP server stopped" 2025-02-14 01:57:58,355 INFO stopped: pushgateway (exit status 0) notificationworker stdout | 2025-02-14 01:57:58,356 [75] [DEBUG] [workers.worker] Shutting down worker. notificationworker stdout | 2025-02-14 01:57:58,356 [75] [DEBUG] [workers.worker] Waiting for running tasks to complete. notificationworker stdout | 2025-02-14 01:57:58,356 [75] [INFO] [apscheduler.scheduler] Scheduler has been shut down notificationworker stdout | 2025-02-14 01:57:58,356 [75] [DEBUG] [apscheduler.scheduler] Looking for jobs to run notificationworker stdout | 2025-02-14 01:57:58,356 [75] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added notificationworker stdout | 2025-02-14 01:57:58,356 [75] [DEBUG] [workers.worker] Finished. 
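The recurring "pushed registry to pushgateway at http://localhost:9091" lines, and the failures that follow, come from prometheus_client's push_to_gateway: each worker pushes its own registry with a grouping key of host, process_name and pid. Once the supervisor stops the local pushgateway during shutdown (01:57:58 above), the next push is refused, which is the ConnectionRefusedError traceback below. A sketch of that push call; the metric name, job label and process values are placeholders:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
heartbeat = Gauge("example_worker_heartbeat_seconds", "illustrative metric only", registry=registry)
heartbeat.set_to_current_time()

try:
    push_to_gateway(
        "localhost:9091",
        job="quay",   # placeholder job label
        registry=registry,
        grouping_key={"host": "quayregistry-quay-app-5dc574b8bf-tszt7",
                      "process_name": "exampleworker.py", "pid": "123"},
    )
except OSError:
    # During shutdown the gateway may already be gone (urllib.error.URLError is an OSError);
    # the workers above just log the failure and continue shutting down.
    pass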
blobuploadcleanupworker stdout | 2025-02-14 01:57:58,474 [57] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'blobuploadcleanupworker.py', 'pid': '57'}
blobuploadcleanupworker stdout | Traceback (most recent call last):
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
blobuploadcleanupworker stdout | h.request(req.get_method(), req.selector, req.data, headers,
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
blobuploadcleanupworker stdout | self._send_request(method, url, body, headers, encode_chunked)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
blobuploadcleanupworker stdout | self.endheaders(body, encode_chunked=encode_chunked)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
blobuploadcleanupworker stdout | self._send_output(message_body, encode_chunked=encode_chunked)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
blobuploadcleanupworker stdout | self.send(msg)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
blobuploadcleanupworker stdout | self.connect()
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
blobuploadcleanupworker stdout | self.sock = self._create_connection(
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/socket.py", line 856, in create_connection
blobuploadcleanupworker stdout | raise err
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/socket.py", line 844, in create_connection
blobuploadcleanupworker stdout | sock.connect(sa)
blobuploadcleanupworker stdout | ConnectionRefusedError: [Errno 111] Connection refused
blobuploadcleanupworker stdout | During handling of the above exception, another exception occurred:
blobuploadcleanupworker stdout | Traceback (most recent call last):
blobuploadcleanupworker stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
blobuploadcleanupworker stdout | push_to_gateway(
blobuploadcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
blobuploadcleanupworker stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
blobuploadcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
blobuploadcleanupworker stdout | handler(
blobuploadcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
blobuploadcleanupworker stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
blobuploadcleanupworker stdout | response = self._open(req, data)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
blobuploadcleanupworker stdout | result = self._call_chain(self.handle_open, protocol, protocol +
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
blobuploadcleanupworker stdout | result = func(*args)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
blobuploadcleanupworker stdout | return self.do_open(http.client.HTTPConnection, req)
blobuploadcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
blobuploadcleanupworker stdout | raise URLError(err)
blobuploadcleanupworker stdout | urllib.error.URLError:
2025-02-14 01:57:58,503 INFO stopped: notificationworker (exit status 0)
2025-02-14 01:57:58,532 INFO stopped: nginx (exit status 0)
namespacegcworker stdout | 2025-02-14 01:57:58,532 [73] [DEBUG] [workers.worker] Shutting down worker.
namespacegcworker stdout | 2025-02-14 01:57:58,533 [73] [DEBUG] [workers.worker] Waiting for running tasks to complete.
namespacegcworker stdout | 2025-02-14 01:57:58,533 [73] [INFO] [apscheduler.scheduler] Scheduler has been shut down
namespacegcworker stdout | 2025-02-14 01:57:58,533 [73] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
namespacegcworker stdout | 2025-02-14 01:57:58,533 [73] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
namespacegcworker stdout | 2025-02-14 01:57:58,533 [73] [DEBUG] [workers.worker] Finished.
autopruneworker stdout | 2025-02-14 01:57:58,653 [56] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'autopruneworker.py', 'pid': '56'}
autopruneworker stdout | Traceback (most recent call last):
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
autopruneworker stdout | h.request(req.get_method(), req.selector, req.data, headers,
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
autopruneworker stdout | self._send_request(method, url, body, headers, encode_chunked)
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
autopruneworker stdout | self.endheaders(body, encode_chunked=encode_chunked)
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
autopruneworker stdout | self._send_output(message_body, encode_chunked=encode_chunked)
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
autopruneworker stdout | self.send(msg)
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
autopruneworker stdout | self.connect()
autopruneworker stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
autopruneworker stdout | self.sock = self._create_connection(
autopruneworker stdout | File "/usr/lib64/python3.9/socket.py", line 856, in create_connection
autopruneworker stdout | raise err
autopruneworker stdout | File "/usr/lib64/python3.9/socket.py", line 844, in create_connection
autopruneworker stdout | sock.connect(sa)
autopruneworker stdout | ConnectionRefusedError: [Errno 111] Connection refused
autopruneworker stdout | During handling of the above exception, another exception occurred:
autopruneworker stdout | Traceback (most recent call last):
autopruneworker stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
autopruneworker stdout | push_to_gateway(
autopruneworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
autopruneworker stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
autopruneworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
autopruneworker stdout | handler(
autopruneworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
autopruneworker stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
autopruneworker stdout | response = self._open(req, data)
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
autopruneworker stdout | result = self._call_chain(self.handle_open, protocol, protocol +
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
autopruneworker stdout | result = func(*args)
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
autopruneworker stdout | return self.do_open(http.client.HTTPConnection, req)
autopruneworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
autopruneworker stdout | raise URLError(err)
autopruneworker stdout | urllib.error.URLError:
2025-02-14 01:57:58,693 INFO stopped: namespacegcworker (exit status 0)
memcache stderr | Exiting normally
2025-02-14 01:58:00,062 INFO waiting for stdout, autopruneworker, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats, gunicorn-registry, gunicorn-secscan, gunicorn-web, logrotateworker, manifestbackfillworker, manifestsubjectbackfillworker, memcache to die
gcworker stdout | 2025-02-14 01:58:00,060 [64] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'gcworker.py', 'pid': '64'}
gcworker stdout | Traceback (most recent call last):
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gcworker stdout | h.request(req.get_method(), req.selector, req.data, headers,
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gcworker stdout | self._send_request(method, url, body, headers, encode_chunked)
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gcworker stdout | self.endheaders(body, encode_chunked=encode_chunked)
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gcworker stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gcworker stdout | self.send(msg)
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gcworker stdout | self.connect()
gcworker stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gcworker stdout | self.sock = self._create_connection(
gcworker stdout | File "/usr/lib64/python3.9/socket.py", line 856, in create_connection
gcworker stdout | raise err
gcworker stdout | File "/usr/lib64/python3.9/socket.py", line 844, in create_connection
gcworker stdout | sock.connect(sa)
gcworker stdout | ConnectionRefusedError: [Errno 111] Connection refused
gcworker stdout | During handling of the above exception, another exception occurred:
gcworker stdout | Traceback (most recent call last):
gcworker stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gcworker stdout | push_to_gateway(
gcworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gcworker stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gcworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gcworker stdout | handler(
gcworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gcworker stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gcworker stdout | response = self._open(req, data)
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gcworker stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gcworker stdout | result = func(*args)
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gcworker stdout | return self.do_open(http.client.HTTPConnection, req)
gcworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gcworker stdout | raise URLError(err)
gcworker stdout | urllib.error.URLError:
2025-02-14 01:58:00,635 INFO stopped: memcache (exit status 0)
memcache stdout | Signal handled: Terminated.
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [DEBUG] [workers.worker] Shutting down worker.
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [DEBUG] [workers.worker] Waiting for running tasks to complete.
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [INFO] [apscheduler.scheduler] Scheduler has been shut down
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
manifestsubjectbackfillworker stdout | 2025-02-14 01:58:00,636 [71] [DEBUG] [workers.worker] Finished.
2025-02-14 01:58:00,787 INFO stopped: manifestsubjectbackfillworker (exit status 0)
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [DEBUG] [workers.worker] Shutting down worker.
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [DEBUG] [workers.worker] Waiting for running tasks to complete.
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [INFO] [apscheduler.scheduler] Scheduler has been shut down
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
manifestbackfillworker stdout | 2025-02-14 01:58:00,787 [70] [DEBUG] [workers.worker] Finished.
2025-02-14 01:58:00,937 INFO stopped: manifestbackfillworker (exit status 0)
2025-02-14 01:58:00,942 WARN stopped: logrotateworker (terminated by SIGTERM)
gunicorn-web stdout | 2025-02-14 01:58:01,246 [68] [ERROR] [gunicorn.error] Worker (pid:135) exited with code 1
gunicorn-web stdout | 2025-02-14 01:58:01,246 [68] [ERROR] [gunicorn.error] Worker (pid:135) exited with code 1.
gunicorn-web stdout | 2025-02-14 01:58:01,383 [243] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '243'}
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-web stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-web stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-web stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-web stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-web stdout | self.send(msg)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-web stdout | self.connect()
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-web stdout | self.sock = self._create_connection(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-web stdout | sock.connect(sa)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-web stdout | self._internal_connect(address)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-web stdout | raise _SocketError(err, strerror(err))
gunicorn-web stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-web stdout | During handling of the above exception, another exception occurred:
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-web stdout | push_to_gateway(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-web stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-web stdout | handler(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-web stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-web stdout | response = self._open(req, data)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-web stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-web stdout | result = func(*args)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-web stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-web stdout | raise URLError(err)
gunicorn-web stdout | urllib.error.URLError:
gunicorn-web stdout | 2025-02-14 01:58:01,389 [242] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '242'}
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-web stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-web stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-web stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-web stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-web stdout | self.send(msg)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-web stdout | self.connect()
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-web stdout | self.sock = self._create_connection(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-web stdout | sock.connect(sa)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-web stdout | self._internal_connect(address)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-web stdout | raise _SocketError(err, strerror(err))
gunicorn-web stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-web stdout | During handling of the above exception, another exception occurred:
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-web stdout | push_to_gateway(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-web stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-web stdout | handler(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-web stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-web stdout | response = self._open(req, data)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-web stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-web stdout | result = func(*args)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-web stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-web stdout | raise URLError(err)
gunicorn-web stdout | urllib.error.URLError:
gunicorn-web stdout | 2025-02-14 01:58:01,391 [68] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'web:application', 'pid': '68'}
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-web stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-web stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-web stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-web stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-web stdout | self.send(msg)
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-web stdout | self.connect()
gunicorn-web stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-web stdout | self.sock = self._create_connection(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-web stdout | sock.connect(sa)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-web stdout | self._internal_connect(address)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-web stdout | raise _SocketError(err, strerror(err))
gunicorn-web stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-web stdout | During handling of the above exception, another exception occurred:
gunicorn-web stdout | Traceback (most recent call last):
gunicorn-web stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-web stdout | push_to_gateway(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-web stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-web stdout | handler(
gunicorn-web stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-web stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-web stdout | response = self._open(req, data)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-web stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-web stdout | result = func(*args)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-web stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-web stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-web stdout | raise URLError(err)
gunicorn-web stdout | urllib.error.URLError:
chunkcleanupworker stdout | 2025-02-14 01:58:01,780 [60] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'chunkcleanupworker.py', 'pid': '60'}
chunkcleanupworker stdout | Traceback (most recent call last):
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
chunkcleanupworker stdout | h.request(req.get_method(), req.selector, req.data, headers,
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
chunkcleanupworker stdout | self._send_request(method, url, body, headers, encode_chunked)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
chunkcleanupworker stdout | self.endheaders(body, encode_chunked=encode_chunked)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
chunkcleanupworker stdout | self._send_output(message_body, encode_chunked=encode_chunked)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
chunkcleanupworker stdout | self.send(msg)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
chunkcleanupworker stdout | self.connect()
chunkcleanupworker stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
chunkcleanupworker stdout | self.sock = self._create_connection(
chunkcleanupworker stdout | File "/usr/lib64/python3.9/socket.py", line 856, in create_connection
chunkcleanupworker stdout | raise err
chunkcleanupworker stdout | File "/usr/lib64/python3.9/socket.py", line 844, in create_connection
chunkcleanupworker stdout | sock.connect(sa)
chunkcleanupworker stdout | ConnectionRefusedError: [Errno 111] Connection refused
chunkcleanupworker stdout | During handling of the above exception, another exception occurred:
chunkcleanupworker stdout | Traceback (most recent call last):
chunkcleanupworker stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
chunkcleanupworker stdout | push_to_gateway(
chunkcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
chunkcleanupworker stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
chunkcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
chunkcleanupworker stdout | handler(
chunkcleanupworker stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
chunkcleanupworker stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
chunkcleanupworker stdout | response = self._open(req, data)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
chunkcleanupworker stdout | result = self._call_chain(self.handle_open, protocol, protocol +
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
chunkcleanupworker stdout | result = func(*args)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
chunkcleanupworker stdout | return self.do_open(http.client.HTTPConnection, req)
chunkcleanupworker stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
chunkcleanupworker stdout | raise URLError(err)
chunkcleanupworker stdout | urllib.error.URLError:
2025-02-14 01:58:02,476 INFO stopped: gunicorn-web (exit status 0)
2025-02-14 01:58:03,298 INFO waiting for stdout, autopruneworker, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats, gunicorn-registry, gunicorn-secscan to die
gunicorn-secscan stdout | 2025-02-14 01:58:03,297 [67] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '67'}
gunicorn-secscan stdout | Traceback (most recent call last):
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-secscan stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-secscan stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-secscan stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-secscan stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-secscan stdout | self.send(msg)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-secscan stdout | self.connect()
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-secscan stdout | self.sock = self._create_connection(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-secscan stdout | sock.connect(sa)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-secscan stdout | self._internal_connect(address)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-secscan stdout | raise _SocketError(err, strerror(err))
gunicorn-secscan stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-secscan stdout | During handling of the above exception, another exception occurred:
gunicorn-secscan stdout | Traceback (most recent call last):
gunicorn-secscan stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-secscan stdout | push_to_gateway(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-secscan stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-secscan stdout | handler(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-secscan stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-secscan stdout | response = self._open(req, data)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-secscan stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-secscan stdout | result = func(*args)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-secscan stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-secscan stdout | raise URLError(err)
gunicorn-secscan stdout | urllib.error.URLError:
gunicorn-secscan stdout | 2025-02-14 01:58:03,299 [238] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'secscan:application', 'pid': '238'}
gunicorn-secscan stdout | Traceback (most recent call last):
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-secscan stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-secscan stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-secscan stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-secscan stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-secscan stdout | self.send(msg)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-secscan stdout | self.connect()
gunicorn-secscan stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-secscan stdout | self.sock = self._create_connection(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-secscan stdout | sock.connect(sa)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-secscan stdout | self._internal_connect(address)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-secscan stdout | raise _SocketError(err, strerror(err))
gunicorn-secscan stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-secscan stdout | During handling of the above exception, another exception occurred:
gunicorn-secscan stdout | Traceback (most recent call last):
gunicorn-secscan stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-secscan stdout | push_to_gateway(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-secscan stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-secscan stdout | handler(
gunicorn-secscan stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-secscan stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-secscan stdout | response = self._open(req, data)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-secscan stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-secscan stdout | result = func(*args)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-secscan stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-secscan stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-secscan stdout | raise URLError(err)
gunicorn-secscan stdout | urllib.error.URLError:
gunicorn-secscan stdout | 2025-02-14 01:58:03,524 [67] [ERROR] [gunicorn.error] Worker (pid:144) exited with code 1
gunicorn-secscan stdout | 2025-02-14 01:58:03,524 [67] [ERROR] [gunicorn.error] Worker (pid:144) exited with code 1.
2025-02-14 01:58:03,759 INFO stopped: gunicorn-secscan (exit status 0)
buildlogsarchiver stdout | 2025-02-14 01:58:04,000 [59] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
buildlogsarchiver stdout | 2025-02-14 01:58:04,001 [59] [DEBUG] [apscheduler.scheduler] Next wakeup is due at 2025-02-14 01:58:34.000511+00:00 (in 29.999525 seconds)
buildlogsarchiver stdout | 2025-02-14 01:58:04,001 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:34 UTC)" (scheduled at 2025-02-14 01:58:04.000511+00:00)
buildlogsarchiver stdout | 2025-02-14 01:58:04,001 [59] [DEBUG] [peewee] ('SELECT "candidates"."id" FROM (SELECT "t1"."id" FROM "repositorybuild" AS "t1" WHERE ((("t1"."phase" IN (%s, %s, %s)) OR ("t1"."started" < %s)) AND ("t1"."logs_archived" = %s)) LIMIT %s) AS "candidates" ORDER BY Random() LIMIT %s OFFSET %s', ['complete', 'error', 'cancelled', datetime.datetime(2025, 1, 30, 1, 58, 4, 1183), False, 50, 1, 0])
buildlogsarchiver stdout | 2025-02-14 01:58:04,011 [59] [DEBUG] [__main__] No more builds to archive
buildlogsarchiver stdout | 2025-02-14 01:58:04,011 [59] [DEBUG] [data.database] Disconnecting from database.
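While the pod drains, buildlogsarchiver is still running its 30-second interval job (ArchiveBuildLogsWorker._archive_redis_buildlogs); the "Next wakeup is due ...", "Running job ..." and peewee query lines are normal APScheduler and peewee debug output for one such run. A minimal sketch of an interval job like this one, using the real apscheduler API with a hypothetical job function:

    # Sketch of a 30-second interval job like the one logged above
    # (the job function is hypothetical; the apscheduler calls are real).
    import time

    from apscheduler.schedulers.background import BackgroundScheduler

    def archive_build_logs():
        print("No more builds to archive")

    scheduler = BackgroundScheduler()
    scheduler.add_job(archive_build_logs, "interval", seconds=30)
    scheduler.start()

    time.sleep(65)  # let the trigger fire a couple of times
    scheduler.shutdown(wait=True)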
buildlogsarchiver stdout | 2025-02-14 01:58:04,011 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2025-02-14 01:58:34 UTC)" executed successfully
gunicorn-registry stdout | 2025-02-14 01:58:04,703 [66] [ERROR] [util.metrics.prometheus] failed to push registry to pushgateway at http://localhost:9091 with grouping key {'host': 'quayregistry-quay-app-5dc574b8bf-tszt7', 'process_name': 'registry:application', 'pid': '66'}
gunicorn-registry stdout | Traceback (most recent call last):
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1346, in do_open
gunicorn-registry stdout | h.request(req.get_method(), req.selector, req.data, headers,
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 1285, in request
gunicorn-registry stdout | self._send_request(method, url, body, headers, encode_chunked)
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
gunicorn-registry stdout | self.endheaders(body, encode_chunked=encode_chunked)
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
gunicorn-registry stdout | self._send_output(message_body, encode_chunked=encode_chunked)
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
gunicorn-registry stdout | self.send(msg)
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 980, in send
gunicorn-registry stdout | self.connect()
gunicorn-registry stdout | File "/usr/lib64/python3.9/http/client.py", line 946, in connect
gunicorn-registry stdout | self.sock = self._create_connection(
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/gevent/socket.py", line 115, in create_connection
gunicorn-registry stdout | sock.connect(sa)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 590, in connect
gunicorn-registry stdout | self._internal_connect(address)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/gevent/_socketcommon.py", line 634, in _internal_connect
gunicorn-registry stdout | raise _SocketError(err, strerror(err))
gunicorn-registry stdout | ConnectionRefusedError: [Errno 111] Connection refused
gunicorn-registry stdout | During handling of the above exception, another exception occurred:
gunicorn-registry stdout | Traceback (most recent call last):
gunicorn-registry stdout | File "/quay-registry/util/metrics/prometheus.py", line 140, in run
gunicorn-registry stdout | push_to_gateway(
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 289, in push_to_gateway
gunicorn-registry stdout | _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 358, in _use_gateway
gunicorn-registry stdout | handler(
gunicorn-registry stdout | File "/app/lib/python3.9/site-packages/prometheus_client/exposition.py", line 221, in handle
gunicorn-registry stdout | resp = build_opener(HTTPHandler).open(request, timeout=timeout)
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 517, in open
gunicorn-registry stdout | response = self._open(req, data)
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 534, in _open
gunicorn-registry stdout | result = self._call_chain(self.handle_open, protocol, protocol +
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 494, in _call_chain
gunicorn-registry stdout | result = func(*args)
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1375, in http_open
gunicorn-registry stdout | return self.do_open(http.client.HTTPConnection, req)
gunicorn-registry stdout | File "/usr/lib64/python3.9/urllib/request.py", line 1349, in do_open
gunicorn-registry stdout | raise URLError(err)
gunicorn-registry stdout | urllib.error.URLError:
gunicorn-registry stdout | 2025-02-14 01:58:04,708 [66] [ERROR] [gunicorn.error] Worker (pid:228) exited with code 1
gunicorn-registry stdout | 2025-02-14 01:58:04,709 [66] [ERROR] [gunicorn.error] Worker (pid:228) exited with code 1.
2025-02-14 01:58:05,797 INFO stopped: gunicorn-registry (exit status 0)
globalpromstats stdout | 2025-02-14 01:58:05,797 [65] [DEBUG] [workers.worker] Shutting down worker.
globalpromstats stdout | 2025-02-14 01:58:05,797 [65] [DEBUG] [workers.worker] Waiting for running tasks to complete.
globalpromstats stdout | 2025-02-14 01:58:05,797 [65] [INFO] [apscheduler.scheduler] Scheduler has been shut down
globalpromstats stdout | 2025-02-14 01:58:05,797 [65] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
globalpromstats stdout | 2025-02-14 01:58:05,797 [65] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
globalpromstats stdout | 2025-02-14 01:58:05,798 [65] [DEBUG] [workers.worker] Finished.
2025-02-14 01:58:05,976 INFO stopped: globalpromstats (exit status 0)
gcworker stdout | 2025-02-14 01:58:05,976 [64] [DEBUG] [workers.worker] Shutting down worker.
gcworker stdout | 2025-02-14 01:58:05,976 [64] [DEBUG] [workers.worker] Waiting for running tasks to complete.
gcworker stdout | 2025-02-14 01:58:05,976 [64] [INFO] [apscheduler.scheduler] Scheduler has been shut down
gcworker stdout | 2025-02-14 01:58:05,976 [64] [DEBUG] [apscheduler.scheduler] Looking for jobs to run
gcworker stdout | 2025-02-14 01:58:05,976 [64] [DEBUG] [apscheduler.scheduler] No jobs; waiting until a job is added
gcworker stdout | 2025-02-14 01:58:05,977 [64] [DEBUG] [workers.worker] Finished.
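Taken together: most processes in this shutdown stop cleanly ("exit status 0"), a few are reported as "terminated by SIGTERM" (reconciliationworker, quotatotalworker, logrotateworker), and the gunicorn workers appear to exit with code 1 only after their final metrics push fails. A small, hypothetical helper for summarizing the supervisord "stopped:" lines from a saved copy of this log (the log file name is an assumption):

    # Hypothetical helper: summarize how each supervisord-managed process
    # stopped, given a saved copy of this log (file name is an assumption).
    import re
    from collections import Counter

    pattern = re.compile(r"(?:INFO|WARN) stopped: (\S+) \(([^)]+)\)")
    outcomes = {}

    with open("quay-app.log", encoding="utf-8") as fh:
        for line in fh:
            match = pattern.search(line)
            if match:
                outcomes[match.group(1)] = match.group(2)

    print(Counter(outcomes.values()))  # e.g. how many exited 0 vs. SIGTERM
    for name, how in sorted(outcomes.items()):
        print(f"{name}: {how}")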