- Bug
- Resolution: Done
- Normal
- rhos-18.0.4
- None
- 2
- False
- False
- ?
- python-whitebox-neutron-tests-tempest-0.9.2-18.0.20250324154821.0bbc254.el9osttrunk
- None
- Neutron Sprint 9, Neutron Sprint 10, Neutron Sprint 11
- 3
- Moderate
To Reproduce
Steps to reproduce the behavior:
- The issue is intermittent: test_neutron_api_restart does not wait for the pods to finish rolling out, which breaks follow-up tests that run immediately after it.
Expected behavior
- test_neutron_api_restart should ensure no stale pods are left over to cause failures in follow-up tests.
Screenshots
- test_neutron_api_restart passes and the next test fails as below:
{0} whitebox_neutron_tempest_plugin.tests.scenario.test_api_server.NeutronAPIServerTest.test_neutron_api_restart [53.368208s] ... ok
{0} setUpClass (whitebox_neutron_tempest_plugin.tests.scenario.test_dvr_ovn.OvnDvrAdvancedTest) [0.000000s] ... FAILED

Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 185, in setUpClass
    raise value.with_traceback(trace)
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 178, in setUpClass
    cls.resource_setup()
  File "/usr/lib/python3.9/site-packages/whitebox_neutron_tempest_plugin/tests/scenario/test_dvr_ovn.py", line 768, in resource_setup
    super(OvnDvrAdvancedTest, cls).resource_setup()
  File "/usr/lib/python3.9/site-packages/whitebox_neutron_tempest_plugin/tests/scenario/base.py", line 1034, in resource_setup
    super(BaseTempestTestCaseAdvanced, cls).resource_setup()
  File "/usr/lib/python3.9/site-packages/whitebox_neutron_tempest_plugin/tests/scenario/test_dvr_ovn.py", line 75, in resource_setup
    config_files = cls.get_configs_of_service()
  File "/usr/lib/python3.9/site-packages/whitebox_neutron_tempest_plugin/tests/scenario/base.py", line 392, in get_configs_of_service
    return cls.proxy_host_client.exec_command(
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
    return fut.result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
    result = fn(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/neutron_tempest_plugin/common/ssh.py", line 172, in exec_command
    return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "/usr/lib/python3.9/site-packages/tempest/lib/common/ssh.py", line 238, in exec_command
    raise exceptions.SSHExecCommandFailed(
neutron_tempest_plugin.common.utils.SSHExecCommandFailed: Command 'oc -n openstack rsh neutron-774b6b77d7-hxctg find /etc/neutron/neutron.conf.d -type f' failed, exit status: 1, stderr: Defaulted container "neutron-api" out of: neutron-api, neutron-httpd
error: Internal error occurred: error executing command in container: container is not created or running
stdout:
Device Info (please complete the following information):
- Seen in unidelta, unialpha uni jobs
Bug impact
- CI stability issues
Known workaround
- None documented.
Example build failures:
The test only waits for the Neutron API to respond [1], but the API responds even before the rollout finishes, while pods are still being recreated one after the other. We should ensure the rollout has finished before moving on, to avoid such issues.
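One way to make the test block until the rollout completes is to poll `oc rollout status` until it succeeds. A minimal sketch follows; the `wait_for_rollout` helper, the deployment name `neutron`, and the injectable `run` callable are illustrative assumptions, not the plugin's actual API:

```python
import subprocess
import time


def wait_for_rollout(deployment, namespace="openstack", timeout=300,
                     interval=5, run=None):
    """Poll until the deployment's rollout completes or the timeout expires.

    `run` takes a command list and returns its exit status; it is
    injectable for testing and defaults to shelling out to `oc`.
    """
    if run is None:
        def run(cmd):
            return subprocess.run(cmd, capture_output=True, text=True).returncode

    # `oc rollout status` exits 0 once all replicas are updated and ready;
    # the short --timeout keeps each individual probe from hanging.
    cmd = ["oc", "-n", namespace, "rollout", "status",
           "deployment/" + deployment, "--timeout=10s"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        if run(cmd) == 0:
            return True
        time.sleep(interval)
    raise TimeoutError(
        "rollout of %s did not finish within %ss" % (deployment, timeout))
```

Calling something like `wait_for_rollout("neutron")` at the end of test_neutron_api_restart (instead of only probing the API) would prevent the stale-pod race that breaks the next test's setUpClass.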
Didn't find a whitebox tests component, so filing this against python-neutron-tests-tempest.
Also see the related Slack thread: https://redhat-internal.slack.com/archives/C06SCNF3RFD/p1737019080194319