  Project Quay / PROJQUAY-5520

[OMR] "--quayStorage" and "--pgStorage" can't work as expected in remote installation mode


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version: omr-v1.3.5
    • Component: OMR
    Description

      Description of problem:

      Installing OMR 1.3.5 remotely with the --pgStorage flag fails: the postgresql container cannot start on the remote VM.

      Installing OMR 1.3.5 remotely with the --quayStorage flag completes successfully, but pushing an image to Quay afterwards fails.

      Version-Release number of selected component (if applicable):

      brew.registry.redhat.io/rh-osbs/openshift-mirror-registry-rhel8:v1.3.5-1

       

      Steps to Reproduce:

      1. Prepare two VMs on the AWS cloud platform. SSH key access from VM1 to VM2 is required for the remote install; a setup sketch follows below.
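
      The remote install needs an SSH key that VM1 can use to reach VM2, passed later via --ssh-key. A minimal setup sketch (the key path and hostname mirror the values used in step 2; this is an assumption about how the environment was prepared):

      $ ssh-keygen -t rsa -f /home/ec2-user/quay_ssh -N ''
      $ ssh-copy-id -i /home/ec2-user/quay_ssh.pub ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com
      $ ssh -i /home/ec2-user/quay_ssh ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com 'echo ok'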

       

      2. Run mirror-registry install on VM1 to install OMR on VM2

      # /home/ec2-user/mirror-registry install --initPassword=12345678 --quayHostname=ec2-18-207-104-83.compute-1.amazonaws.com --sslCert=/home/ec2-user/ssl.crt --sslKey=/home/ec2-user/ssl.key --sslCheckSkip=true --quayRoot=/home/ec2-user/quay --pgStorage=/home/ec2-user/quay/postgresql  --quayStorage=/home/ec2-user/quay/quay-storage --targetHostname=ec2-18-207-104-83.compute-1.amazonaws.com --ssh-key=/home/ec2-user/quay_ssh --targetUsername=ec2-user -v
      ......
      TASK [mirror_appliance : Install Postgres Service] *************************************************************************************
      included: /runner/project/roles/mirror_appliance/tasks/install-postgres-service.yaml for ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com
      TASK [mirror_appliance : Create necessary directory for Postgres persistent data] ******************************************************
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Set permissions on local storage directory] *******************************************************************
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Copy Postgres systemd service file] ***************************************************************************
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Check if Postgres image is loaded] ****************************************************************************
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Pull Postgres image] ******************************************************************************************
      skipping: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Create Postgres Storage named volume] *************************************************************************
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Start Postgres service] ***************************************************************************************
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Wait for pg_trgm to be installed] *****************************************************************************
      FAILED - RETRYING: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: Wait for pg_trgm to be installed (20 retries left).
      FAILED - RETRYING: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: .........
      FAILED - RETRYING: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: Wait for pg_trgm to be installed (1 retries left).
      fatal: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: FAILED! => {"attempts": 20, "changed": true, "cmd": ["podman", "exec", "-it", "quay-postgres", "/bin/bash", "-c", "echo 'CREATE EXTENSION IF NOT EXISTS pg_trgm' | psql -d quay -U postgres"], "delta": "0:00:00.048564", "end": "2023-05-17 05:36:06.805713", "msg": "non-zero return code", "rc": 125, "start": "2023-05-17 05:36:06.757149", "stderr": "Error: no container with name or ID \"quay-postgres\" found: no such container", "stderr_lines": ["Error: no container with name or ID \"quay-postgres\" found: no such container"], "stdout": "", "stdout_lines": []}
      PLAY RECAP *****************************************************************************************************************************
      ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com : ok=28   changed=11   unreachable=0    failed=1    skipped=3    rescued=0    ignored=0   
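
      If the failed deployment on VM2 needs to be cleaned up before retrying (see step 4), the uninstall subcommand accepts the same remote-target flags as install. A cleanup sketch, assuming the flag set used above (--autoApprove skips the interactive confirmation prompt):

      # /home/ec2-user/mirror-registry uninstall -v --quayRoot=/home/ec2-user/quay --targetHostname=ec2-18-207-104-83.compute-1.amazonaws.com --targetUsername=ec2-user --ssh-key=/home/ec2-user/quay_ssh --autoApprove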
      

       

      3. Check container and file status on the remote VM2

      # podman ps -a
      CONTAINER ID  IMAGE                                        COMMAND     CREATED             STATUS      PORTS                   NAMES
      3bae43035d16  registry.access.redhat.com/ubi8/pause:8.7-6  infinity    About a minute ago  Created     0.0.0.0:8443->8443/tcp  6f74d894b65c-infra
       
      
      $ tree quay/
      quay/
      ├── image-archive.tar
      ├── pause.tar
      ├── postgresql
      ├── postgres.tar
      ├── quay-storage
      ├── quay.tar
      └── redis.tar
      
      
      $ podman volume ls
       .......
      local       pg-storage
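
      Although the quay-postgres container has already been removed, its startup output should still be recoverable from the systemd user journal on VM2. A debugging sketch (the unit name quay-postgres.service is an assumption based on the container name and the "Copy Postgres systemd service file" task above):

      $ systemctl --user status quay-postgres.service
      $ journalctl --user -u quay-postgres.service --no-pager | tail -n 50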

       

      4. Remove the `--pgStorage` flag from the installation command and run it again on VM1

      # /home/ec2-user/mirror-registry install --initPassword=12345678 --sslCheckSkip=true --quayHostname=ec2-18-207-104-83.compute-1.amazonaws.com:8443 --sslCert=/home/ec2-user/ssl.crt --sslKey=/home/ec2-user/ssl.key --targetHostname=ec2-18-207-104-83.compute-1.amazonaws.com --ssh-key=/home/ec2-user/quay_ssh --targetUsername=ec2-user -v --quayRoot=/home/ec2-user/quay --quayStorage=/home/ec2-user/quay/quay-storage 
       .......
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Pull Quay image] **********************************************************************************************
      skipping: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Create Quay Storage named volume] *****************************************************************************
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Start Quay service] *******************************************************************************************
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Wait for Quay] ************************************************************************************************
      included: /runner/project/roles/mirror_appliance/tasks/wait-for-quay.yaml for ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com
      TASK [mirror_appliance : Waiting up to 3 minutes for Quay to become alive at https://ec2-18-207-104-83.compute-1.amazonaws.com:8443/health/instance] ***
      FAILED - RETRYING: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: Waiting up to 3 minutes for Quay to become alive at https://ec2-18-207-104-83.compute-1.amazonaws.com:8443/health/instance (10 retries left).
      FAILED - RETRYING: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]: Waiting up to 3 minutes for Quay to become alive at https://ec2-18-207-104-83.compute-1.amazonaws.com:8443/health/instance (9 retries left).
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Create init user] *********************************************************************************************
      included: /runner/project/roles/mirror_appliance/tasks/create-init-user.yaml for ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com
      TASK [mirror_appliance : Creating init user at endpoint https://ec2-18-207-104-83.compute-1.amazonaws.com:8443/api/v1/user/initialize] ***
      ok: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      TASK [mirror_appliance : Enable lingering for systemd user processes] ******************************************************************
      changed: [ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com]
      PLAY RECAP *****************************************************************************************************************************
      ec2-user@ec2-18-207-104-83.compute-1.amazonaws.com : ok=51   changed=29   unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
      INFO[2023-05-17 07:41:06] Quay installed successfully, config data is stored in /home/ec2-user/quay 
      INFO[2023-05-17 07:41:06] Quay is available at https://ec2-18-207-104-83.compute-1.amazonaws.com:8443 with credentials (init, 12345678) 
      

       

      5. Check container and file status on VM2

      [ec2-user@ip-10-0-4-185 logs]$ podman ps -a
      CONTAINER ID  IMAGE                                                    COMMAND         CREATED         STATUS         PORTS                   NAMES
      130d8f75b87d  registry.access.redhat.com/ubi8/pause:8.7-6              infinity        49 minutes ago  Up 49 minutes  0.0.0.0:8443->8443/tcp  66e8e15966a4-infra
      961b01cf6c74  registry.redhat.io/rhel8/postgresql-10:1-203.1669834630  run-postgresql  48 minutes ago  Up 48 minutes  0.0.0.0:8443->8443/tcp  quay-postgres
      e2202d0a3eb4  registry.redhat.io/rhel8/redis-6:1-92.1669834635         run-redis       48 minutes ago  Up 48 minutes  0.0.0.0:8443->8443/tcp  quay-redis
      df9f33f213d8  registry.redhat.io/quay/quay-rhel8:v3.8.7                registry        48 minutes ago  Up 48 minutes  0.0.0.0:8443->8443/tcp  quay-app  
      
       $ tree /home/ec2-user/quay/
      /home/ec2-user/quay/
      ├── image-archive.tar
      ├── pause.tar
      ├── postgresql
      ├── postgres.tar
      ├── quay-config
      │   ├── config.yaml
      │   ├── ssl.cert
      │   └── ssl.key
      ├── quay-storage
      ├── quay.tar
      └── redis.tar
      
      $ podman volume ls
      DRIVER      VOLUME NAME
      local       05642cb1a2bba3c9a1766027f383d21b3e425c175420a7483200951676bece3b
      .......
      local       pg-storage
      local       quay-storage
      

       

      6. Push an image to Quay

      $ podman push --tls-verify=false ec2-18-207-104-83.compute-1.amazonaws.com:8443/org1/repo1:ppc64le
      Getting image source signatures
      Copying blob 1ec79c72d048 [--------------------------------------] 8.0b / 6.7MiB
      Error: writing blob: initiating layer upload to /v2/org1/repo1/blobs/uploads/ in ec2-18-207-104-83.compute-1.amazonaws.com:8443: received unexpected HTTP status: 502 Bad Gateway
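
      Since the 502 is returned by the Quay endpoint itself, the application log on VM2 is the first place to check. A debugging sketch (quay-app and quay-postgres are the container names shown in step 5):

      $ podman logs --tail 50 quay-app
      $ podman logs --tail 20 quay-postgres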

       

      7. Check the pushed data on VM2

      $ tree /home/ec2-user/quay/quay-storage/
      /home/ec2-user/quay/quay-storage/
      0 directories, 0 files

       

      8. Check the pushed data in the podman volume on VM2

      $ podman  inspect quay-storage
      [
           {
                "Name": "quay-storage",
                "Driver": "local",
                "Mountpoint": "/home/ec2-user/.local/share/containers/storage/volumes/quay-storage/_data",
                "CreatedAt": "2023-05-17T13:31:28.099994151Z",
                "Labels": {},
                "Scope": "local",
                "Options": {},
                "MountCount": 0,
                "NeedsCopyUp": true,
                "NeedsChown": true
           }
      ] 
      
      $ tree /home/ec2-user/.local/share/containers/storage/volumes/quay-storage/_data
      /home/ec2-user/.local/share/containers/storage/volumes/quay-storage/_data
      0 directories, 0 files
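
      To confirm which storage the quay-app container is actually writing to, its mounts can be listed. A verification sketch (the Go template fields follow podman's container inspect output; /datastorage is assumed to be Quay's in-container storage path based on its default local storage configuration):

      $ podman inspect quay-app --format '{{ range .Mounts }}{{ .Type }}: {{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}'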

      Actual results:

      When mirror-registry install is run with the --pgStorage flag in remote mode, the postgresql container starts on the remote VM but stops and is removed almost immediately, so the installation fails. Because the container is removed, there is no straightforward way to collect its error log.

      When mirror-registry install is run with the --quayStorage flag in remote mode, pushing an image fails. It appears the image data is not saved in the directory set by --quayStorage, while the podman named volume quay-storage is still created.
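
      This behavior suggests the remote install attaches the Quay container to the named volume instead of bind-mounting the host directory given by --quayStorage. A sketch of the difference, for illustration only (the real podman arguments are generated by the Ansible role and are not shown here; the /datastorage path is an assumption):

      # named volume, stored under ~/.local/share/containers/storage/volumes (what appears to happen)
      $ podman run ... -v quay-storage:/datastorage ...
      # bind mount of the host directory (what --quayStorage is expected to produce)
      $ podman run ... -v /home/ec2-user/quay/quay-storage:/datastorage:Z ...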

      Expected results:

      When mirror-registry install is run with the --pgStorage and --quayStorage flags in remote mode, the installation should complete successfully, Quay's database data should be stored in the directory set by --pgStorage, and image blobs should be stored in the directory set by --quayStorage.

          People

            Assignee: Unassigned
            Reporter: Weihua Hu (rhwhu)
            Votes: 0
            Watchers: 2