Red Hat OpenStack Services on OpenShift / OSPRH-19451

[18.0] Cannot delete a VM that uses a PCI device whose matching device_spec is removed if PCI in Placement is enabled


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • rhos-18.0.z
    • rhos-18.0.0
    • Component: openstack-nova
    • Sprint: Sprint 5 Quasar & Pulsar
    • Severity: Moderate

      Note that it is also reported upstream as https://bugs.launchpad.net/nova/+bug/2115905.

       

      If a device_spec is removed while the device matching it is in use, nova-compute raises a warning at startup. Our documentation says:

      https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#pci-tracking-in-placement

      Reconfiguring the PCI devices on the hypervisor or changing the pci.device_spec configuration option and restarting the nova-compute service is supported in the following cases:

        * new devices are added

        * devices without allocation are removed

        Removing a device that has allocations is not supported. If a device having any allocation is removed then the nova-compute service will keep the device and the allocation exists in the nova DB and in placement and logs a warning. If a device with any allocation is reconfigured in a way that an allocated PF is removed and VFs from the same PF is configured (or vice versa) then nova-compute will refuse to start as it would create a situation where both the PF and its VFs are made available for consumption.
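
      For reference, the relevant nova-compute configuration in this kind of setup looks roughly like the sketch below. The exact [pci] entries used in this reproduction are not included in the report, so the values are illustrative; the vendor/product id 8086:10ca and the nic-vf alias name are taken from the placement inventory and flavor output in the reproduction below. Removing the device_spec line while one of the matching VFs is allocated is what triggers the warning.

      # Illustrative sketch only; the actual config of this environment is not part of the report
      [pci]
      report_in_placement = true
      device_spec = { "vendor_id": "8086", "product_id": "10ca" }
      alias = { "vendor_id": "8086", "product_id": "10ca", "device_type": "type-VF", "name": "nic-vf" }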

      The actual warning says:

      Unable to remove device with status 'allocated' and ownership 818f2460-61ff-449e-b3c4-9e3626e01645 because of PCI device 1:0000:07:10.2 is allocated instead of ['available', 'unavailable', 'unclaimable']. Check your [pci]device_spec configuration to make sure this allocated device is whitelisted. If you have removed the device from the whitelist intentionally or the device is no longer available on the host you will need to delete the server or migrate it to another host to silence this warning.: nova.exception.PciDeviceInvalidStatus: PCI device 1:0000:07:10.2 is allocated instead of ['available', 'unavailable', 'unclaimable'] 

      But the suggestion to delete the server (and probably also the one to migrate it) is wrong. Trying to delete the server in this state causes the VM to go to ERROR state, and it still cannot be deleted.

      The only way to delete the VM at that point is to manually delete the VM's placement allocation first, then delete the VM. This is pretty dangerous, so it should not be suggested.
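
      For completeness, that dangerous manual workaround looks roughly like the following (it needs the osc-placement CLI plugin and admin credentials, and is shown only to illustrate what is being argued against, not as a recommendation):

      # NOT recommended: deleting the consumer's allocation by hand bypasses nova's own accounting
      stack@aio:~$ openstack resource provider allocation show 818f2460-61ff-449e-b3c4-9e3626e01645
      stack@aio:~$ openstack resource provider allocation delete 818f2460-61ff-449e-b3c4-9e3626e01645
      stack@aio:~$ openstack server delete vm1 --wait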

      The clean way to avoid this is not to remove the device_spec while the device is in use. Or, if it has already been removed, put it back, delete the VM, and then remove the device_spec again. However, if this problem is triggered not by a manual reconfiguration of the device_spec but by a device disappearing from the hypervisor, putting it back to delete the VM is not an option.
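
      If the device is still present on the hypervisor, the "put it back" recovery is roughly the sequence below (a sketch, reusing the paths and commands from this devstack reproduction):

      # 1. Re-add the removed [pci]device_spec entry and restart nova-compute
      stack@aio:~$ vim /etc/nova/nova-cpu.conf
      stack@aio:~$ sudo systemctl restart devstack@n-cpu
      # 2. Delete (or migrate) the server while its device is whitelisted again
      stack@aio:~$ openstack server delete vm1 --wait
      # 3. Only now remove the device_spec entry again and restart nova-compute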

      See the full reproduction steps and stack traces below:

      
      stack@aio:~$ openstack resource provider list
      +--------------------------------------+-------------------------------------+------------+--------------------------------------+--------------------------------------+
      | uuid                                 | name                                | generation | root_provider_uuid                   | parent_provider_uuid                 |
      +--------------------------------------+-------------------------------------+------------+--------------------------------------+--------------------------------------+
      | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | aio                                 |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | None                                 |
      | 1efb8cd6-89b5-48ec-86f8-5c8fffef5508 | aio_0000:07:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 2307b997-cbb4-4c47-8812-5b3f60b7082f | aio_0000:08:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 1555dde0-882e-495d-a002-6e07118e62cc | aio_0000:09:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | a639b233-4b94-43ba-802f-9455b3dad2d0 | aio_0000:0A:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | fe251030-ca7a-4e15-a7ce-e3b4fe4b826b | aio_0000:0B:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | f2aa0958-e512-4f8e-943a-3174f519b0ad | aio_0000:0C:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 37924234-7248-4822-bcba-065e94fdf521 | aio_0000:0D:00.0                    |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 31b21568-8d05-5d9c-a045-6956ac62790a | aio:Open vSwitch agent              |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 1110cf59-cabf-526c-bacc-08baabbac692 | aio:Open vSwitch agent:br-test      |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 31b21568-8d05-5d9c-a045-6956ac62790a |
      | 9734f92c-16da-585b-a19c-e3d4f30302fe | aio:NIC Switch agent                |          1 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      | 65012506-3cd0-584e-ada4-f0132385842c | aio:NIC Switch agent:enp6s0         |          2 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 9734f92c-16da-585b-a19c-e3d4f30302fe |
      | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | compute1                            |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | None                                 |
      | acc895fc-a4bb-459d-8ecc-6990b1bcb011 | compute1_0000:07:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | ccf6b4b2-aa36-49dc-90a0-dd45683d93fd | compute1_0000:08:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 0b6de83e-0623-4212-b19b-43bd627d9df1 | compute1_0000:09:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 4807d406-7a5d-4f87-ac0c-fd69b012f2d2 | compute1_0000:0A:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 0e01943c-5dfb-40c8-831c-40d894e307a8 | compute1_0000:0B:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 96c43ec1-7622-4ba3-8899-5adade9340d5 | compute1_0000:0C:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | bad53b5c-a4e2-482b-84d9-0e036fad357b | compute1_0000:0D:00.0               |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | fb0754fa-1ab6-5af7-8bd2-9f45cbd645e3 | compute1:NIC Switch agent           |          1 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 6cd5ef1e-19cc-5fab-9ba6-0beb53c42a07 | compute1:NIC Switch agent:enp6s0    |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | fb0754fa-1ab6-5af7-8bd2-9f45cbd645e3 |
      | 1b3fbba6-730a-5bac-ab80-57c3b37fe92e | compute1:Open vSwitch agent         |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 |
      | 83253a2b-c9f4-5bcf-9eda-bf2d438ae7ee | compute1:Open vSwitch agent:br-test |          2 | 1846a20c-c7a8-41cb-8b82-1554b8b010f3 | 1b3fbba6-730a-5bac-ab80-57c3b37fe92e |
      +--------------------------------------+-------------------------------------+------------+--------------------------------------+--------------------------------------+
      stack@aio:~$ openstack flavor list
      +--------------------------------------+-----------+-------+------+-----------+-------+-----------+
      | ID                                   | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +--------------------------------------+-----------+-------+------+-----------+-------+-----------+
      | 1                                    | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2                                    | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3                                    | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4                                    | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42                                   | m1.nano   |   192 |    1 |         0 |     1 | True      |
      | 5                                    | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84                                   | m1.micro  |   256 |    1 |         0 |     1 | True      |
      | bbef66ee-336e-4d1f-b491-5fe40fd4fd8b | m1.vf1    |  2048 |    4 |         0 |     1 | True      |
      | c1                                   | cirros256 |   256 |    1 |         0 |     1 | True      |
      | d1                                   | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2                                   | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3                                   | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4                                   | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +--------------------------------------+-----------+-------+------+-----------+-------+-----------+
      stack@aio:~$ openstack image list
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | abfbecbb-3a45-4e32-928e-33247e3b0f77 | cirros-0.6.3-x86_64-disk | active |
      | 7655168a-dcea-416f-ba80-44dd9bca6089 | ubuntu-24.04             | active |
      +--------------------------------------+--------------------------+--------+
      stack@aio:~$ openstack server create --image cirros-0.6.3-x86_64-disk --flavor m1.vf1 --nic none vm1 --wait
      
      +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Field                               | Value                                                                                                                                                                      |
      +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                                                                                                                                                     |
      | OS-EXT-AZ:availability_zone         | nova                                                                                                                                                                       |
      | OS-EXT-SRV-ATTR:host                | aio                                                                                                                                                                        |
      | OS-EXT-SRV-ATTR:hostname            | vm1                                                                                                                                                                        |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | aio                                                                                                                                                                        |
      | OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                                                                                                                                          |
      | OS-EXT-SRV-ATTR:kernel_id           | None                                                                                                                                                                       |
      | OS-EXT-SRV-ATTR:launch_index        | None                                                                                                                                                                       |
      | OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                                                                                                                       |
      | OS-EXT-SRV-ATTR:reservation_id      | r-qwo64t0d                                                                                                                                                                 |
      | OS-EXT-SRV-ATTR:root_device_name    | /dev/vda                                                                                                                                                                   |
      | OS-EXT-SRV-ATTR:user_data           | None                                                                                                                                                                       |
      | OS-EXT-STS:power_state              | Running                                                                                                                                                                    |
      | OS-EXT-STS:task_state               | None                                                                                                                                                                       |
      | OS-EXT-STS:vm_state                 | active                                                                                                                                                                     |
      | OS-SRV-USG:launched_at              | 2025-07-03T13:54:58.000000                                                                                                                                                 |
      | OS-SRV-USG:terminated_at            | None                                                                                                                                                                       |
      | accessIPv4                          | None                                                                                                                                                                       |
      | accessIPv6                          | None                                                                                                                                                                       |
      | addresses                           | N/A                                                                                                                                                                        |
      | adminPass                           | 3SkwEnczczMW                                                                                                                                                               |
      | config_drive                        | None                                                                                                                                                                       |
      | created                             | 2025-07-03T13:54:45Z                                                                                                                                                       |
      | description                         | None                                                                                                                                                                       |
      | flavor                              | description=, disk='4', ephemeral='0', extra_specs.pci_passthrough:alias='nic-vf:1', id='m1.vf1', is_disabled=, is_public='True', location=, name='m1.vf1',                |
      |                                     | original_name='m1.vf1', ram='2048', rxtx_factor=, swap='0', vcpus='1'                                                                                                      |
      | hostId                              | ce6b9f7d1d53050b7fb455bbeae02c1a331cd614fc2353591d53bbb5                                                                                                                   |
      | host_status                         | UP                                                                                                                                                                         |
      | id                                  | 818f2460-61ff-449e-b3c4-9e3626e01645                                                                                                                                       |
      | image                               | cirros-0.6.3-x86_64-disk (abfbecbb-3a45-4e32-928e-33247e3b0f77)                                                                                                            |
      | key_name                            | None                                                                                                                                                                       |
      | locked                              | None                                                                                                                                                                       |
      | locked_reason                       | None                                                                                                                                                                       |
      | name                                | vm1                                                                                                                                                                        |
      | pinned_availability_zone            | None                                                                                                                                                                       |
      | progress                            | None                                                                                                                                                                       |
      | project_id                          | 8d91661d06a7416eb112dafb94c1fc61                                                                                                                                           |
      | properties                          | None                                                                                                                                                                       |
      | scheduler_hints                     |                                                                                                                                                                            |
      | security_groups                     | name='default'                                                                                                                                                             |
      | server_groups                       | None                                                                                                                                                                       |
      | status                              | ACTIVE                                                                                                                                                                     |
      | tags                                |                                                                                                                                                                            |
      | trusted_image_certificates          | None                                                                                                                                                                       |
      | updated                             | 2025-07-03T13:54:58Z                                                                                                                                                       |
      | user_id                             | 1a77929331a24ffbb79e06f5f74e4a6a                                                                                                                                           |
      | volumes_attached                    |                                                                                                                                                                            |
      +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      stack@aio:~$ virsh dumpxml 1 | grep address
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            <listen type='address' address='0.0.0.0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
              <address domain='0x0000' bus='0x07' slot='0x10' function='0x2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      stack@aio:~$ openstack resource provider inventory list 1efb8cd6-89b5-48ec-86f8-5c8fffef5508
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total | used |
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      | CUSTOM_PCI_8086_10CA |              1.0 |        1 |        6 |        0 |         1 |     6 |    1 |
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      stack@aio:~$ openstack resource provider list | grep 1efb8cd6-89b5-48ec-86f8-5c8fffef5508
      | 1efb8cd6-89b5-48ec-86f8-5c8fffef5508 | aio_0000:07:00.0                    |          3 | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f | 0ec8431e-30d7-467d-bae3-5dd5101cbe0f |
      
      # removed the device_spec that matches the used device
      stack@aio:~$ vim /etc/nova/nova-cpu.conf 
      stack@aio:~$ 
      stack@aio:~$ sudo systemctl restart devstack@n-cpu
      stack@aio:~$ 
      
      
      # during the nova-compute restart it logs a warning
      
      Jul 03 13:59:58 aio nova-compute[129926]: WARNING nova.pci.manager [None req-10a2b0f3-1d12-4d7d-b2bd-c6ee1157a04a None None] Unable to remove device with status 'allocated' and ownership 818f2460-61ff-449e-b3c4-9e3626e01645 because of PCI device 1:0000:07:10.2 is allocated instead of ['available', 'unavailable', 'unclaimable']. Check your [pci]device_spec configuration to make sure this allocated device is whitelisted. If you have removed the device from the whitelist intentionally or the device is no longer available on the host you will need to delete the server or migrate it to another host to silence this warning.: nova.exception.PciDeviceInvalidStatus: PCI device 1:0000:07:10.2 is allocated instead of ['available', 'unavailable', 'unclaimable']
      
      # and also logs a stack trace
      
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.scheduler.client.report [None req-10a2b0f3-1d12-4d7d-b2bd-c6ee1157a04a None None] [req-9b23ae6b-7653-4133-8e3c-7829785e34f1] Failed to delete resource provider with UUID 1efb8cd6-89b5-48ec-86f8-5c8fffef5508 from the placement API. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n Unable to delete resource provider 1efb8cd6-89b5-48ec-86f8-5c8fffef5508: Resource provider has allocations.  ", "request_id": "req-9b23ae6b-7653-4133-8e3c-7829785e34f1"}]}.
      Jul 03 14:00:00 aio nova-compute[129926]: DEBUG oslo_concurrency.lockutils [None req-10a2b0f3-1d12-4d7d-b2bd-c6ee1157a04a None None] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.340s {{(pid=129926) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424}}
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager [None req-10a2b0f3-1d12-4d7d-b2bd-c6ee1157a04a None None] Error updating resources for node aio.: nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager Traceback (most recent call last):
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1406, in catch_all
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     yield
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1485, in update_from_provider_tree
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self._delete_provider(uuid)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/scheduler/client/report.py", line 762, in _delete_provider
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     raise exception.ResourceProviderInUse()
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager nova.exception.ResourceProviderInUse: Resource provider has allocations.
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager 
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager During handling of the above exception, another exception occurred:
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager 
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager Traceback (most recent call last):
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 11229, in _update_available_resource_for_node
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self.rt.update_available_resource(context, nodename,
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 965, in update_available_resource
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py", line 415, in inner
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     return f(*args, **kwargs)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager            ^^^^^^^^^^^^^^^^^^
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1096, in _update_available_resource
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self._update(context, cn, startup=startup)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1405, in _update
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 55, in wrapped_f
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 265, in call
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 312, in get
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     raise exc.with_traceback(tb)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 259, in call
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager                       ^^^^^^^^^^^^^^^^^^^
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1390, in _update_to_placement
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self.reportclient.update_from_provider_tree(
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1484, in update_from_provider_tree
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     with catch_all(uuid):
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     self.gen.throw(value)
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1418, in catch_all
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager     raise exception.ResourceProviderSyncFailed()
      Jul 03 14:00:00 aio nova-compute[129926]: ERROR nova.compute.manager nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      
      
      # and the RP remained as is
      
      stack@aio:~$ openstack resource provider inventory list 1efb8cd6-89b5-48ec-86f8-5c8fffef5508
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total | used |
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      | CUSTOM_PCI_8086_10CA |              1.0 |        1 |        6 |        0 |         1 |     6 |    1 |
      +----------------------+------------------+----------+----------+----------+-----------+-------+------+
      
      # then deleted the VM, which led to a bunch of stack traces; the VM ended up in ERROR state
      
      stack@aio:~$ openstack server delete vm1 --wait
      stack@aio:~$ openstack server list
      +--------------------------------------+------+--------+----------+--------------------------+--------+
      | ID                                   | Name | Status | Networks | Image                    | Flavor |
      +--------------------------------------+------+--------+----------+--------------------------+--------+
      | 818f2460-61ff-449e-b3c4-9e3626e01645 | vm1  | ERROR  |          | cirros-0.6.3-x86_64-disk | m1.vf1 |
      +--------------------------------------+------+--------+----------+--------------------------+--------+
      
      
      Jul 03 14:06:57 aio nova-compute[129926]: DEBUG oslo_concurrency.lockutils [None req-700c97c3-55a0-4b2e-94dd-4917d548188d admin admin] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.330s {{(pid=129926) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424}}
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [None req-700c97c3-55a0-4b2e-94dd-4917d548188d admin admin] [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] Setting instance vm_state to ERROR: nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] Traceback (most recent call last):
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1406, in catch_all
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     yield
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1485, in update_from_provider_tree
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._delete_provider(uuid)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/scheduler/client/report.py", line 762, in _delete_provider
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     raise exception.ResourceProviderInUse()
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] nova.exception.ResourceProviderInUse: Resource provider has allocations.
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] 
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] During handling of the above exception, another exception occurred:
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] 
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] Traceback (most recent call last):
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/manager.py", line 3385, in do_terminate_instance
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._delete_instance(context, instance, bdms)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/manager.py", line 3349, in _delete_instance
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._complete_deletion(context, instance)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/manager.py", line 929, in _complete_deletion
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._update_resource_tracker(context, instance)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/manager.py", line 695, in _update_resource_tracker
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self.rt.update_usage(context, instance, instance.node)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py", line 415, in inner
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     return f(*args, **kwargs)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]            ^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 732, in update_usage
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._update(context.elevated(), self.compute_nodes[nodename])
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1405, in _update
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self._update_to_placement(context, compute_node, startup)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 55, in wrapped_f
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     return Retrying(*dargs, **dkw).call(f, *args, **kw)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 265, in call
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     return attempt.get(self._wrap_exception)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 312, in get
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     raise exc.with_traceback(tb)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 259, in call
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]                       ^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1390, in _update_to_placement
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self.reportclient.update_from_provider_tree(
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1484, in update_from_provider_tree
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     with catch_all(uuid):
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     self.gen.throw(value)
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1418, in catch_all
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645]     raise exception.ResourceProviderSyncFailed()
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      Jul 03 14:06:57 aio nova-compute[129926]: ERROR nova.compute.manager [instance: 818f2460-61ff-449e-b3c4-9e3626e01645] 
      
       
      
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server [None req-700c97c3-55a0-4b2e-94dd-4917d548188d admin admin] Exception during message handling: nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1406, in catch_all
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     yield
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1485, in update_from_provider_tree
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._delete_provider(uuid)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/client/report.py", line 762, in _delete_provider
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise exception.ResourceProviderInUse()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server nova.exception.ResourceProviderInUse: Resource provider has allocations.
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server 
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server 
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/oslo.messaging/oslo_messaging/rpc/server.py", line 174, in _process_incoming
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/oslo.messaging/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/oslo.messaging/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server              ^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/exception_wrapper.py", line 65, in wrapped
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 227, in __exit__
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.force_reraise()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise self.value
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/exception_wrapper.py", line 63, in wrapped
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 167, in decorated_function
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 227, in __exit__
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.force_reraise()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise self.value
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 158, in decorated_function
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 1483, in decorated_function
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 214, in decorated_function
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 227, in __exit__
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.force_reraise()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise self.value
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 204, in decorated_function
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3397, in terminate_instance
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     do_terminate_instance(instance, bdms)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py", line 415, in inner
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3392, in do_terminate_instance
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 227, in __exit__
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.force_reraise()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise self.value
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3385, in do_terminate_instance
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._delete_instance(context, instance, bdms)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3349, in _delete_instance
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._complete_deletion(context, instance)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 929, in _complete_deletion
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._update_resource_tracker(context, instance)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 695, in _update_resource_tracker
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.rt.update_usage(context, instance, instance.node)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py", line 415, in inner
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 732, in update_usage
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._update(context.elevated(), self.compute_nodes[nodename])
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1405, in _update
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self._update_to_placement(context, compute_node, startup)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 55, in wrapped_f
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return Retrying(*dargs, **dkw).call(f, *args, **kw)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 265, in call
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     return attempt.get(self._wrap_exception)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 312, in get
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise exc.with_traceback(tb)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.12/site-packages/retrying.py", line 259, in call
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server                       ^^^^^^^^^^^^^^^^^^^
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1390, in _update_to_placement
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.reportclient.update_from_provider_tree(
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1484, in update_from_provider_tree
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     with catch_all(uuid):
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     self.gen.throw(value)
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/client/report.py", line 1418, in catch_all
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server     raise exception.ResourceProviderSyncFailed()
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server nova.exception.ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.
      Jul 03 14:06:59 aio nova-compute[129926]: ERROR oslo_messaging.rpc.server 
      
      

              rh-ee-bgibizer Balazs Gibizer
              James Parker James Parker
              rhos-workloads-compute
              Votes: 0
              Watchers: 4
