
PROJQUAY-3337: Quay 3.7.0 app pods crash when the tls component is unmanaged


      Description:

This issue was found when deploying Quay 3.7.0 with the Quay Operator, using a managed route and an unmanaged tls component. After the deployment completed, the Quay app pods kept restarting; the pod logs show "2022-03-02 06:57:59,711 WARN received SIGTERM indicating exit request" (see the attached quay370_app_pod.logs for details).

Note: this issue only occurs when the tls component is unmanaged.

Operator bundle image: quay-operator-bundle-container-v3.7.0-15

      oc get pod
      quay-operator.v3.7.0-5d7c658885-qqqzn         1/1     Running            0          6h26m
      quay360-clair-app-7cbf8fc657-ttwgw            1/1     Running            0          35m
      quay360-clair-app-7cbf8fc657-v69gc            1/1     Running            0          44s
      quay360-clair-app-7cbf8fc657-vm75c            1/1     Running            0          35m
      quay360-clair-app-7cbf8fc657-vvq24            1/1     Running            0          44s
      quay360-clair-postgres-699d68c566-qzjlt       1/1     Running            1          35m
      quay360-quay-app-6b5ddfc9f9-hwf4z             0/1     CrashLoopBackOff   7          34m
      quay360-quay-app-6b5ddfc9f9-xhlpl             0/1     CrashLoopBackOff   7          34m
      quay360-quay-app-upgrade-pcxqg                0/1     Completed          0          35m
      quay360-quay-config-editor-86b88f5b-zqjm2     1/1     Running            0          35m
      quay360-quay-database-68dddccc5-k2cqs         1/1     Running            0          35m
      quay360-quay-mirror-8445665f6b-l5mc2          1/1     Running            0          35m
      quay360-quay-mirror-8445665f6b-nvrkv          1/1     Running            0          35m
      quay360-quay-redis-f48894fcb-65hjw            1/1     Running            0          35m   
      
      oc get pod quay360-quay-app-6b5ddfc9f9-hwf4z -o json | jq '.spec.containers[0].image'
      "registry.redhat.io/quay/quay-rhel8@sha256:ca8af5cda7f76a8a05e745c73245f7e0227ff93b349c874cb76ed1a480ef0c39"

      Quay Pod logs:

      nginx stdout | 127.0.0.1 () - - [02/Mar/2022:08:15:46 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.22.0" (0.002 162 0.002)
      gunicorn-registry stdout | 2022-03-02 08:15:46,071 [238] [INFO] [gunicorn.access] 127.0.0.1 - - [02/Mar/2022:08:15:46 +0000] "GET /v1/_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.22.0"
      2022-03-02 08:15:46,080 WARN received SIGTERM indicating exit request
      2022-03-02 08:15:46,080 INFO waiting for stdout, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats, gunicorn-registry, gunicorn-secscan, gunicorn-web, jwtproxy, logrotateworker, manifestbackfillworker, memcache, namespacegcworker, nginx, notificationworker, pushgateway, queuecleanupworker, repositoryactioncounter, repositorygcworker, securityscanningnotificationworker, securityworker, servicekey, storagereplication, teamsyncworker to die
      gunicorn-web stdout | 2022-03-02 08:15:46,080 [213] [WARNING] [py.warnings] /usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:997: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
      gunicorn-web stdout |   warnings.warn(
      nginx stdout | 127.0.0.1 () - - [02/Mar/2022:08:15:46 +0000] "GET /_internal_ping HTTP/1.1" 200 4 "-" "python-requests/2.22.0" (0.002 159 0.002)
      gunicorn-web stdout | 2022-03-02 08:15:46,082 [210] [INFO] [gunicorn.access] 127.0.0.1 - - [02/Mar/2022:08:15:46 +0000] "GET /_internal_ping HTTP/1.0" 200 4 "-" "python-requests/2.22.0"
      gunicorn-web stdout | 2022-03-02 08:15:46,083 [213] [INFO] [data.database] Connection pooling disabled for postgresql
      2022-03-02 08:15:46,091 INFO stopped: teamsyncworker (terminated by SIGTERM)
      2022-03-02 08:15:46,100 INFO stopped: storagereplication (terminated by SIGTERM)
      servicekey stdout | 2022-03-02 08:15:46,101 [86] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      nginx stdout | 10.129.2.1 () - - [02/Mar/2022:08:15:46 +0000] "GET /health/instance HTTP/2.0" 200 152 "-" "kube-probe/1.21" (0.047 48 0.047)
      gunicorn-web stdout | 2022-03-02 08:15:46,106 [213] [INFO] [gunicorn.access] 10.129.2.1 - - [02/Mar/2022:08:15:46 +0000] "GET /health/instance HTTP/1.0" 200 152 "-" "kube-probe/1.21"
      2022-03-02 08:15:46,281 INFO stopped: servicekey (exit status 0)
      securityworker stdout | 2022-03-02 08:15:46,281 [85] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:46,471 INFO stopped: securityworker (exit status 0)  
      ......
      2022-03-02 08:15:52,847 INFO stopped: memcache (exit status 0)
      manifestbackfillworker stdout | 2022-03-02 08:15:52,848 [71] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:53,030 INFO stopped: manifestbackfillworker (exit status 0)
      2022-03-02 08:15:53,038 INFO stopped: logrotateworker (terminated by SIGTERM)
      jwtproxy stderr | time="2022-03-02T08:15:53Z" level=info msg="Received stop signal. Stopping gracefully..."
      2022-03-02 08:15:53,041 INFO stopped: jwtproxy (exit status 0)
      buildlogsarchiver stdout | 2022-03-02 08:15:54,241 [59] [INFO] [apscheduler.executors.default] Running job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2022-03-02 08:16:24 UTC)" (scheduled at 2022-03-02 08:15:54.240556+00:00)
      buildlogsarchiver stdout | 2022-03-02 08:15:54,256 [59] [INFO] [apscheduler.executors.default] Job "ArchiveBuildLogsWorker._archive_redis_buildlogs (trigger: interval[0:00:30], next run at: 2022-03-02 08:16:24 UTC)" executed successfully
      2022-03-02 08:15:54,820 INFO stopped: gunicorn-web (exit status 0)
      2022-03-02 08:15:55,547 INFO waiting for stdout, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats, gunicorn-registry, gunicorn-secscan to die
      gcworker stdout | 2022-03-02 08:15:55,546 [64] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-03-02 08:16:25 UTC)" (scheduled at 2022-03-02 08:15:55.546081+00:00)
      gcworker stdout | 2022-03-02 08:15:55,547 [64] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-03-02 08:16:25 UTC)" executed successfully
      2022-03-02 08:15:56,065 INFO stopped: gunicorn-secscan (exit status 0)
      2022-03-02 08:15:58,665 INFO stopped: gunicorn-registry (exit status 0)
      2022-03-02 08:15:58,666 INFO waiting for stdout, blobuploadcleanupworker, builder, buildlogsarchiver, chunkcleanupworker, dnsmasq, expiredappspecifictokenworker, exportactionlogsworker, gcworker, globalpromstats to die
      globalpromstats stdout | 2022-03-02 08:15:58,666 [65] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:58,865 INFO stopped: globalpromstats (exit status 0)
      gcworker stdout | 2022-03-02 08:15:58,866 [64] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:59,086 INFO stopped: gcworker (exit status 0)
      exportactionlogsworker stdout | 2022-03-02 08:15:59,087 [63] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:59,296 INFO stopped: exportactionlogsworker (exit status 0)
      expiredappspecifictokenworker stdout | 2022-03-02 08:15:59,296 [62] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:15:59,480 INFO stopped: expiredappspecifictokenworker (exit status 0)
      dnsmasq stderr | dnsmasq: exiting on receipt of SIGTERM
      2022-03-02 08:15:59,485 INFO stopped: dnsmasq (exit status 0)
      2022-03-02 08:16:00,495 INFO stopped: chunkcleanupworker (terminated by SIGTERM)
      buildlogsarchiver stdout | 2022-03-02 08:16:00,496 [59] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:16:00,692 INFO stopped: buildlogsarchiver (exit status 0)
      2022-03-02 08:16:00,704 INFO stopped: builder (terminated by SIGTERM)
      blobuploadcleanupworker stdout | 2022-03-02 08:16:00,704 [57] [INFO] [apscheduler.scheduler] Scheduler has been shut down
      2022-03-02 08:16:00,909 INFO stopped: blobuploadcleanupworker (exit status 0)
      2022-03-02 08:16:00,910 INFO stopped: stdout (terminated by SIGTERM)
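
      The "received SIGTERM indicating exit request" entry suggests the container is being stopped from outside, for example by the kubelet after a failing liveness probe. One way to check, assuming the current namespace, is to list the events recorded for the pod:

      oc get events --field-selector involvedObject.name=quay360-quay-app-6b5ddfc9f9-hwf4z

      Probe failures show up as Unhealthy events, followed by Killing events when the kubelet restarts the container.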

      Quay Config.yaml:

      FEATURE_EXTENDED_REPOSITORY_NAMES: true
      CREATE_REPOSITORY_ON_PUSH_PUBLIC: true
      FEATURE_USER_INITIALIZE: true
      SERVER_HOSTNAME: quay360.apps.quay-perf-796.perfscale.devcluster.openshift.com
      ALLOWED_OCI_ARTIFACT_TYPES:
        application/vnd.cncf.helm.config.v1+json:
          - application/tar+gzip
        application/vnd.oci.image.layer.v1.tar+gzip+encrypted:
          - application/vnd.oci.image.layer.v1.tar+gzip+encrypted
      DEFAULT_TAG_EXPIRATION: 4w
      TAG_EXPIRATION_OPTIONS:
      - 2w
      - 4w
      - 8w
      FEATURE_GENERAL_OCI_SUPPORT: true
      FEATURE_HELM_OCI_SUPPORT: true
      SUPER_USERS:
        - quay
        - admin
      DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
        - default
      DISTRIBUTED_STORAGE_PREFERENCE:
        - default
      DISTRIBUTED_STORAGE_CONFIG:
        default:
          - S3Storage
          - s3_bucket: quay360
            storage_path: /quay360
            s3_access_key: ******
            s3_secret_key: ******
            host: s3.us-east-2.amazonaws.com 
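
      Since the tls component is unmanaged, the TLS certificate/key pair is supplied through the config bundle secret (step 2 below). As a sanity check, the keys in the bundle can be listed (jq assumed available); the expected keys are config.yaml, ssl.cert and ssl.key:

      oc get secret config-bundle-secret -o json | jq -r '.data | keys[]'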

      QuayRegistry CR File:

      apiVersion: quay.redhat.com/v1
      kind: QuayRegistry
      metadata:
        name: quay360
      spec:
        configBundleSecret: config-bundle-secret
        components:
          - kind: objectstorage
            managed: false
          - kind: route
            managed: true
          - kind: tls
            managed: false 
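
      The operator also records its view of the deployment on the CR itself; as a sketch, the status conditions can be inspected with:

      oc get quayregistry quay360 -o json | jq '.status.conditions'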

      Steps:

      1. Deploy the Quay 3.7.0 Operator in a single OCP namespace
      2. Create the Quay config bundle secret: "oc create secret generic config-bundle-secret --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key"
      3. Create the QuayRegistry with a managed route and unmanaged tls: "oc create -f quayregistry_s3_tls_route_unmanaged.yaml" (see the consolidated command sequence after this list)
      4. Check the Quay app pod status
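
      For reference, steps 2-4 as one command sequence (file names as in the steps above):

      oc create secret generic config-bundle-secret \
          --from-file config.yaml=./config.yaml \
          --from-file ssl.cert=./ssl.cert \
          --from-file ssl.key=./ssl.key
      oc create -f quayregistry_s3_tls_route_unmanaged.yaml
      oc get pods -w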

      Expected Results:

The Quay app pods reach Ready status.

      Actual Results:

The Quay app pods keep restarting and eventually end up in CrashLoopBackOff.

              Assignee: Ricardo Maraschini (rmarasch@redhat.com)
              Reporter: luffy zhang (lzha1981)
