Details
Type: Bug
Resolution: Done
Priority: Blocker
Fix Version: quay-v3.7.0
Description:
This issue was found when deploying Quay with the Quay Operator: when the 'clairpostgres' component is set to unmanaged and a custom clair-config.yaml is provided, the Clair app pod fails to start with the error message "failed to initialize indexer: failed to register configured scanners: failed getting id for scanner".
Quay Image: quay-operator-bundle-container-v3.7.0-73
clair-config.yaml:

indexer:
  connstring: host=quay370.postgres.database.azure.com port=5432 dbname=postgres user=quay370@quay370 password=Welcome123!@ sslmode=disable
matcher:
  connstring: host=quay370.postgres.database.azure.com port=5432 dbname=postgres user=quay370@quay370 password=Welcome123!@ sslmode=disable
notifier:
  connstring: host=quay370.postgres.database.azure.com port=5432 dbname=postgres user=quay370@quay370 password=Welcome123!@ sslmode=disable
updaters:
  sets:
  - rhel
  - suse
log_level: debug
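The connstring values above are libpq-style keyword/value strings. As a small self-contained sketch of how to pull one field out of such a string (the string is inlined below, with the password field omitted for illustration):

```shell
# libpq keyword/value connection string, as in clair-config.yaml above
# (password field omitted here for illustration)
connstring='host=quay370.postgres.database.azure.com port=5432 dbname=postgres user=quay370@quay370 sslmode=disable'

# Print the host field by scanning the space-separated key=value pairs
for field in $connstring; do
  case $field in
    host=*) echo "${field#host=}" ;;
  esac
done
# prints: quay370.postgres.database.azure.com
```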
oc get pod
NAME                                          READY   STATUS             RESTARTS       AGE
quay-operator.v3.7.0-9fd6b99ff-zs25c          1/1     Running            0              137m
quay370-clair-app-5ff4db569f-6kdnz            0/1     CrashLoopBackOff   7 (103s ago)   12m
quay370-clair-app-5ff4db569f-6qq5g            0/1     CrashLoopBackOff   7 (119s ago)   12m
quay370-quay-app-6875568f4d-hn2n6             1/1     Running            0              12m
quay370-quay-app-6875568f4d-kjhxz             1/1     Running            0              12m
quay370-quay-app-upgrade-q7xx9                0/1     Completed          0              12m
quay370-quay-config-editor-5fb4dcc69c-74t4k   1/1     Running            0              12m
quay370-quay-database-799865c7f5-555b6        1/1     Running            0              12m
quay370-quay-mirror-7965f746c5-cgskh          1/1     Running            0              12m
quay370-quay-mirror-7965f746c5-qjr49          1/1     Running            0              12m
quay370-quay-redis-8466c4c775-82zp4           1/1     Running            0              12m

oc get pod quay370-clair-app-5ff4db569f-6kdnz -o json | jq '.spec.containers[0].image'
"registry.redhat.io/quay/clair-rhel8@sha256:f3b0cb4cd05ce9b6308754fae4bbd1c036ad37646cf71209c55377458d911a27"

oc logs -f quay370-clair-app-5ff4db569f-6kdnz
{"level":"debug","component":"initialize/Logging","time":"2022-04-14T03:22:44Z","message":"logging initialized"}
{"level":"info","component":"main","version":"v4.4.1","time":"2022-04-14T03:22:44Z","message":"starting"}
{"level":"info","component":"main","lint":"introspection address not provided, default will be used (at $.introspection_addr)","time":"2022-04-14T03:22:44Z"}
{"level":"info","component":"main","lint":"automatically sizing number of concurrent requests (at $.indexer.index_report_request_concurrency)","time":"2022-04-14T03:22:44Z"}
{"level":"info","component":"main","lint":"no delivery mechanisms specified (at $.notifier)","time":"2022-04-14T03:22:44Z"}
{"level":"debug","component":"main","time":"2022-04-14T03:22:44Z","message":"found cgroups v1 and cpu controller"}
{"level":"debug","component":"main","time":"2022-04-14T03:22:44Z","message":"falling back to root hierarchy"}
{"level":"info","component":"main","cur":4,"prev":8,"time":"2022-04-14T03:22:44Z","message":"set GOMAXPROCS value"}
{"level":"info","component":"main","version":"v4.4.1","time":"2022-04-14T03:22:44Z","message":"ready"}
{"level":"info","component":"main","time":"2022-04-14T03:22:44Z","message":"launching introspection server"}
{"level":"info","component":"main","time":"2022-04-14T03:22:44Z","message":"launching http transport"}
{"level":"info","component":"main","time":"2022-04-14T03:22:44Z","message":"registered signal handler"}
{"level":"info","component":"initialize/Services","time":"2022-04-14T03:22:44Z","message":"begin service initialization"}
{"level":"info","component":"introspection/New","address":":8089","time":"2022-04-14T03:22:44Z","message":"no introspection address provided; using default"}
{"level":"warn","component":"introspection/New","time":"2022-04-14T03:22:44Z","message":"no health check configured; unconditionally reporting OK"}
{"level":"info","component":"introspection/Server.withPrometheus","endpoint":"/metrics","server":":8089","time":"2022-04-14T03:22:44Z","message":"configuring prometheus"}
{"level":"info","component":"introspection/New","time":"2022-04-14T03:22:44Z","message":"no distributed tracing enabled"}
{"level":"info","component":"libindex/New","time":"2022-04-14T03:22:44Z","message":"created database connection"}
{"level":"debug","component":"internal/ctxlock/Locker.reconnect","gen":"1","time":"2022-04-14T03:22:44Z","message":"set up"}
{"level":"info","component":"initialize/Services","time":"2022-04-14T03:22:44Z","message":"end service initialization"}
{"level":"error","component":"main","error":"service initialization failed: failed to initialize indexer: failed to register configured scanners: failed getting id for scanner \"dpkg\": ERROR: relation \"scanner\" does not exist (SQLSTATE 42P01)","time":"2022-04-14T03:22:44Z","message":"fatal error"}
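The fatal error is easy to miss inside the structured JSON logs. As a sketch, the Postgres SQLSTATE code can be pulled out of the error-level line like this (the line is hard-coded below so the snippet is self-contained; against the live pod the input would come from `oc logs quay370-clair-app-5ff4db569f-6kdnz`):

```shell
# Error-level log line, copied from the Clair pod output above
log='{"level":"error","component":"main","error":"service initialization failed: failed to initialize indexer: failed to register configured scanners: failed getting id for scanner \"dpkg\": ERROR: relation \"scanner\" does not exist (SQLSTATE 42P01)","time":"2022-04-14T03:22:44Z","message":"fatal error"}'

# Extract the Postgres SQLSTATE code from the error message
printf '%s\n' "$log" | grep -o 'SQLSTATE [A-Z0-9]*'
# prints: SQLSTATE 42P01
```

SQLSTATE 42P01 is PostgreSQL's undefined_table error, consistent with the missing "scanner" relation, i.e. Clair's schema was never created in the unmanaged database.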
QuayRegistry CR:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay370
spec:
  configBundleSecret: config-bundle-secret
  components:
  - kind: objectstorage
    managed: false
  - kind: route
    managed: true
  - kind: tls
    managed: false
  - kind: clairpostgres
    managed: false
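For quick inspection, the unmanaged components can be filtered straight out of the CR's components list. A sketch with the relevant spec lines inlined so it is self-contained (on a live cluster the input could come from `oc get quayregistry quay370 -o yaml` instead):

```shell
# components list from the QuayRegistry spec above
components='
- kind: objectstorage
  managed: false
- kind: route
  managed: true
- kind: tls
  managed: false
- kind: clairpostgres
  managed: false'

# Print the kind of every component whose managed flag is false
printf '%s\n' "$components" | grep -B1 'managed: false' | awk '/kind:/ {print $3}'
# prints objectstorage, tls, clairpostgres (one per line)
```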
Steps:
- Deploy Quay with the Quay Operator, with the 'clairpostgres' component set to unmanaged, and create the config bundle secret with 'oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret'
- Check the Clair app pod status
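Step 2 can be scripted by filtering crash-looping pods out of the `oc get pod` output. A self-contained sketch (sample lines in the same shape are hard-coded here):

```shell
# Sample lines in the shape of the `oc get pod` output above; on a live
# cluster, replace the printf with: oc get pod --no-headers
printf '%s\n' \
  'quay370-clair-app-5ff4db569f-6kdnz   0/1   CrashLoopBackOff   7   12m' \
  'quay370-quay-app-6875568f4d-hn2n6    1/1   Running            0   12m' \
  | awk '$3 == "CrashLoopBackOff" {print $1}'
# prints: quay370-clair-app-5ff4db569f-6kdnz
```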
Expected Results:
The Clair app pod reaches Ready status.
Actual Results:
The Clair app pod crashes with CrashLoopBackOff.