OpenShift Virtualization / CNV-63919

[Chaos][GPFS][Automation] Fix tests/chaos/standard/test_standard.py::TestVMInstanceTypeOperationsPodDelete

      Last Comment by cnv-qe jira on 2025-07-21 14:11:

      New unmerged PR has been added to the task, moving status from Dev Complete to In Progress.


      The tests do not always fail; when they do, they fail with varying error messages:

      1. test_restart_vm (ODF)

      kubernetes.client.exceptions.ApiException: (500) Reason: Internal Server Error
      HTTP response headers: HTTPHeaderDict({'Audit-Id': 'cf2d7ef4-45b2-4194-b410-9981615d25e0', 'Cache-Control': 'no-cache, private', 'Content-Length': '338', 'Content-Type': 'application/json', 'Date': 'Wed, 18 Jun 2025 07:14:38 GMT', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '49a07c9c-b7e9-41a8-9cd5-8cb3e1656730', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'd03cb94c-503e-4926-876b-38603f163f9c'})
      HTTP response body: {"kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Internal error occurred: unable to complete request: stop/start already underway", "reason": "InternalError", "details": {"causes": [{"message": "unable to complete request: stop/start already underway"}]}, "code": 500}

       

      2. test_stop_vm (ODF, GPFS)

      kubernetes.client.exceptions.ApiException: (409) Reason: Conflict
      HTTP response headers: HTTPHeaderDict({'Audit-Id': '08d181f3-bc4b-46bf-92b1-39db9c20719b', 'Cache-Control': 'no-cache, private', 'Content-Length': '411', 'Content-Type': 'application/json', 'Date': 'Tue, 17 Jun 2025 18:20:46 GMT', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': 'c702f6a4-110f-45e9-b053-5d28f51a4690', 'X-Kubernetes-Pf-Prioritylevel-Uid': '132c7dca-80e7-4d73-8d02-e42bd433318f'})
      HTTP response body: {"kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Operation cannot be fulfilled on virtualmachine.kubevirt.io \"vm-chaos-0-1750184269-3716104\": Halted only supports manual stop requests with a shorter graceperiod", "reason": "Conflict", "details": {"name": "vm-chaos-0-1750184269-3716104", "group": "kubevirt.io", "kind": "virtualmachine"}, "code": 409}

       
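      Failures 1 and 2 both look like races: the chaos-driven pod deletions leave a previous VM lifecycle operation still in flight when the next restart/stop request arrives. A minimal sketch of a retry wrapper the tests could use, assuming the API surfaces these as transient 500/409 conflicts (the ApiException stand-in below is hypothetical and only mimics the kubernetes client's exception, since the real client is not shown here):

```python
import time


class ApiException(Exception):
    """Hypothetical stand-in for kubernetes.client.exceptions.ApiException."""

    def __init__(self, status, reason=""):
        super().__init__(f"({status}) Reason: {reason}")
        self.status = status
        self.reason = reason


def call_with_retry(operation, retry_statuses=(409, 500), attempts=5, delay=2.0):
    """Retry a VM lifecycle call while the API reports a transient conflict,
    e.g. 'stop/start already underway' (500) or the Halted runStrategy
    conflict (409). Re-raises on the last attempt or on other statuses."""
    for attempt in range(attempts):
        try:
            return operation()
        except ApiException as exc:
            if exc.status in retry_statuses and attempt < attempts - 1:
                time.sleep(delay)
                continue
            raise
```

      In the real tests this would wrap the restart/stop call itself; ideally the retry would also match the specific transient message rather than the bare status code.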

      3. test_deploy_vm (GPFS)

      ________________ ERROR at setup of TestVMInstanceTypeOperationsPodDelete.test_deploy_vmvirt-api#-chaos_vms_instancetype_list0 ________________

      admin_client = <kubernetes.dynamic.client.DynamicClient object at 0x11a3926c0>
      cnv_pod_deletion_test_matrix_class_ = {'virt-api': {'interval': 5, 'max_duration': 300, 'namespace_name': 'openshift-cnv', 'pod_prefix': 'virt-api', ...}}

          @pytest.fixture(scope="class")
          def deleted_pod_by_name_prefix(admin_client, cnv_pod_deletion_test_matrix_class_):
              pod_matrix_key = [*cnv_pod_deletion_test_matrix_class_][0]
              pod_deletion_config = cnv_pod_deletion_test_matrix_class_[pod_matrix_key]

              deleted_pod_by_name_prefix = create_pod_deleting_process(
                  dyn_client=admin_client,
                  pod_prefix=pod_deletion_config["pod_prefix"],
                  namespace_name=pod_deletion_config["namespace_name"],
                  ratio=pod_deletion_config["ratio"],
                  interval=pod_deletion_config["interval"],
                  max_duration=pod_deletion_config["max_duration"],
              )
      >       deleted_pod_by_name_prefix.start()

      tests/chaos/conftest.py:400:
      _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/process.py:121: in start
          self._popen = self._Popen(self)
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/context.py:224: in _Popen
          return _default_context.get_context().Process._Popen(process_obj)
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/context.py:289: in _Popen
          return Popen(process_obj)
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/popen_spawn_posix.py:32: in __init__
          super().__init__(process_obj)
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/popen_fork.py:19: in __init__
          self._launch(process_obj)
      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/popen_spawn_posix.py:47: in _launch
          reduction.dump(process_obj, fp)
      _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

      obj = <Process name='pod_delete' parent=10168 initial>, file = <_io.BytesIO object at 0x11a429d50>, protocol = None

          def dump(obj, file, protocol=None):
              '''Replacement for pickle.dump() using ForkingPickler.'''
      >       ForkingPickler(file, protocol).dump(obj)
      E       AttributeError: Can't get local object 'create_pod_deleting_process.<locals>._delete_pods_continuously'

      ../../.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/multiprocessing/reduction.py:60: AttributeError

              qwang@redhat.com Qixuan Wang
              nrozen@redhat.com Nir Rozen