Task
Resolution: Unresolved
rhel-kernel-ft-plumbers-1
After upgrading our rhel-9-8 image with:
irqbalance (2:1.9.4-4.el9 -> 2:1.9.4-5.el9)
kernel (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-core (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-devel (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-headers (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-modules (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-modules-core (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-tools (5.14.0-628.el9 -> 5.14.0-631.el9)
kernel-tools-libs (5.14.0-628.el9 -> 5.14.0-631.el9)
kmod-kvdo (8.2.6.3-178.el9 -> 8.2.6.3-179.el9)
python3-perf (5.14.0-628.el9 -> 5.14.0-631.el9)
rhel-system-roles (1.110.0-0.1.el9 -> 1.110.1-1.1.el9)
Our mdadm-related tests now fail with:
> warn: Error starting RAID array: Process reported exit code 1: mdadm: Unable to initialize sysfs
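For reference, a minimal sketch of the kind of mdadm invocation that hits this error (hypothetical reproducer, not our actual test code; loop-device paths and sizes are illustrative):

```
# Create two small backing files and attach them as loop devices
truncate -s 64M /tmp/md-disk0.img /tmp/md-disk1.img
losetup /dev/loop0 /tmp/md-disk0.img
losetup /dev/loop1 /tmp/md-disk1.img

# Assemble a RAID1 array from the two loop devices
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
# On kernel 5.14.0-631.el9 this reportedly fails with:
#   mdadm: Unable to initialize sysfs
```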
This rings a bell! We already ran into this in 6.17rc0, so I suspect a backported change is regressing mdadm; I believe the same issue was fixed in Fedora by patching mdadm.
Since the kernel update regresses our tests, I am reporting it here. Perhaps xni@redhat.com should be kept in the loop?