Red Hat OpenStack Services on OpenShift / OSPRH-8814

Adoption test suite can fail on cinder-scheduler not being available yet


      A CI job failed like this:

       

      TASK [cinder_adoption : Pause to allow backend drivers to start] ***************
      skipping: [localhost] => {"changed": false, "false_condition": "cinder_volume_backend | default('') != '' or cinder_backup_backend | default('') != ''", "skip_reason": "Conditional result was False"}
      
      TASK [cinder_adoption : check that Cinder is reachable and its endpoints are defined] ***
      FAILED - RETRYING: [localhost]: check that Cinder is reachable and its endpoints are defined (60 retries left).
      changed: [localhost] => {"attempts": 2, "changed": true, "cmd": "set -euxo pipefail\n\n\nalias openstack=\"oc exec -t openstackclient -- openstack\"\n\n${BASH_ALIASES[openstack]} endpoint list | grep cinder\n${BASH_ALIASES[openstack]} volume type list\n", "delta": "0:00:04.802908", "end": "2024-07-19 08:34:14.065022", "msg": "", "rc": 0, "start": "2024-07-19 08:34:09.262114", "stderr": "+ alias 'openstack=oc exec -t openstackclient -- openstack'\n+ oc exec -t openstackclient -- openstack endpoint list\n+ grep cinder\n+ oc exec -t openstackclient -- openstack volume type list", "stderr_lines": ["+ alias 'openstack=oc exec -t openstackclient -- openstack'", "+ oc exec -t openstackclient -- openstack endpoint list", "+ grep cinder", "+ oc exec -t openstackclient -- openstack volume type list"], "stdout": "| 9dae5dbb596c445ea4ff1d5f1f3a36c7 | regionOne | cinderv3     | volumev3     | True    | internal  | https://cinder-internal.openstack.svc:8776/v3                         |\n| bc9a52830c5a414a80567e6fbea6d121 | regionOne | cinderv3     | volumev3     | True    | public    | https://cinder-public-openstack.apps-crc.testing/v3                   |\n+--------------------------------------+---------+-----------+\n| ID                                   | Name    | Is Public |\n+--------------------------------------+---------+-----------+\n| 778c7611-609b-4e81-8cd8-2d03a49f79d9 | tripleo | True      |\n+--------------------------------------+---------+-----------+", "stdout_lines": ["| 9dae5dbb596c445ea4ff1d5f1f3a36c7 | regionOne | cinderv3     | volumev3     | True    | internal  | https://cinder-internal.openstack.svc:8776/v3                         |", "| bc9a52830c5a414a80567e6fbea6d121 | regionOne | cinderv3     | volumev3     | True    | public    | https://cinder-public-openstack.apps-crc.testing/v3                   |", "+--------------------------------------+---------+-----------+", "| ID                                   | Name    | Is Public |", "+--------------------------------------+---------+-----------+", "| 778c7611-609b-4e81-8cd8-2d03a49f79d9 | tripleo | True      |", "+--------------------------------------+---------+-----------+"]}
      
      TASK [cinder_adoption : wait for Cinder volume to be up and ready] *************
      skipping: [localhost] => {"changed": false, "false_condition": "cinder_volume_backend != ''", "skip_reason": "Conditional result was False"}
      
      TASK [cinder_adoption : wait for Cinder backup to be up and ready] *************
      skipping: [localhost] => {"changed": false, "false_condition": "cinder_backup_backend != ''", "skip_reason": "Conditional result was False"}
      
      TASK [cinder_adoption : Cinder online data migrations] *************************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "set -euxo pipefail\n\n\noc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations\n", "delta": "0:00:00.166316", "end": "2024-07-19 08:34:14.485782", "msg": "non-zero return code", "rc": 1, "start": "2024-07-19 08:34:14.319466", "stderr": "+ oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations\nDefaulted container \"cinder-scheduler\" out of: cinder-scheduler, probe\nUnable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cinder-scheduler\")", "stderr_lines": ["+ oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations", "Defaulted container \"cinder-scheduler\" out of: cinder-scheduler, probe", "Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cinder-scheduler\")"], "stdout": "", "stdout_lines": []} 
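
      The "error: unable to upgrade connection: container not found" message is what oc exec typically returns when the target container is not running yet, so the task almost certainly raced the startup of the cinder-scheduler pod; the "Unable to use a TTY" line above it is only a warning and not the cause of the failure. A quick way to see the per-container state at that moment would be something like the following (a debugging sketch, not part of the CI job):

      oc get pod cinder-scheduler-0 \
        -o jsonpath='{range .status.containerStatuses[*]}{.name} ready={.ready}{"\n"}{end}'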

      But then looking at the must-gather data, the cinder-scheduler pod was running fine:

       

      pod/cinder-2f0e-account-create-bdkwv                                0/1     Completed          0             3m8s
      pod/cinder-api-0                                                    2/2     Running            0             2m39s
      pod/cinder-db-create-dsnwq                                          0/1     Completed          0             3m18s
      pod/cinder-db-sync-cfpvx                                            0/1     Completed          0             3m2s
      pod/cinder-scheduler-0                                              2/2     Running            0             2m25s 

      I think we just need to add a wait for the scheduler to come up before we attempt the online data migrations.
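
      A minimal sketch of what such a wait could look like as a task in the cinder_adoption role, placed right before the online data migrations task (the task name, retries, and delay below are illustrative and simply mirror the retry pattern the role already uses for the endpoint check; this is not the actual role code):

      - name: wait for cinder-scheduler pod to be ready
        ansible.builtin.shell: |
          set -euxo pipefail
          oc wait pod/cinder-scheduler-0 --for=condition=Ready --timeout=10s
        changed_when: false
        register: cinder_scheduler_ready
        until: cinder_scheduler_ready is succeeded
        retries: 60
        delay: 5

      Once the pod reports Ready, the existing oc exec cinder-scheduler-0 -- cinder-manage db online_data_migrations step should no longer hit the "container not found" error.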

              geguileo@redhat.com Gorka Eguileor
              jstransk@redhat.com Jiri Stransky
              rhos-dfg-storage-squad-cinder
