- Bug
- Resolution: Unresolved
- Major
- rhel-9.2.0
- Low
- rhel-virt-storage
This issue was first reported against RHOSP 17.1 in https://issues.redhat.com/browse/OSPRH-20737, and the customer was able to reproduce the performance degradation with a "vanilla" KVM VM. Some of the content below is copied from that bug description.
The customer is using IBM Storwize FC with the VMs.
The customer case is https://access.redhat.com/support/cases/#/case/04254080
What were you trying to do that didn't work?
The customer has migrated from VMware to RHOSP 17.1 and has observed nearly a 50% disk I/O performance degradation for their applications.
What is the impact of this issue to you?
End customers are complaining that their application response times are too slow and the applications are almost unusable.
Please provide the package NVR for which the bug is seen:
From one VM's qemu log:
2025-09-18 10:01:59.265+0000: starting up libvirt version: 9.0.0, package: 10.13.el9_2 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2024-12-20-11:16:16, ), qemu version: 7.2.0qemu-kvm-7.2.0-14.el9_2.18, kernel: 5.14.0-284.118.1.el9_2.x86_64
How reproducible is this bug?:
The customer is able to reproduce it.
Steps to reproduce
- Customer migrates VMs from legacy VMware to OSP 17.1.x.
- Disk I/O performance tests executed directly on the hardware node against the mapped volumes perform as expected, to the customer's satisfaction.
- Execute the same performance tests from VMs running on that compute node and using the mapped volumes.
Expected results
No decrease in disk I/O throughput.
Actual results
Close to a 50% drop in disk I/O throughput.
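The ticket does not record which benchmark the customer ran. As a minimal sketch of the host-vs-guest comparison described in the steps above, a coarse sequential-write check can be done with `dd` (the target path below is hypothetical; point it at a file on the mapped volume when run on the host, and at the same volume's filesystem when run inside the guest; `fio` would give finer control over block size, queue depth, and read/write mix):

```shell
# Coarse sequential-write throughput check. Run once on the hardware node
# against the mapped volume, then inside the VM, and compare the MB/s figures.
TARGET=/tmp/ddtest   # hypothetical path; use a file on the volume under test
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET"
```

`conv=fdatasync` forces the data to stable storage before `dd` reports, so the printed MB/s reflects the device rather than the page cache.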
- is depended on by: OSPRH-20737 Applications become very slow after migrating to kvm (New)