What were you trying to do that didn't work?
For a reason I have not been able to pin down yet (I have already spent two full days on this), when a "rogue" kernel parameter is passed at boot, systemd-journald.socket very often fails because systemd gets an EPOLLHUP on the listening socket just after switching root:
# systemctl status systemd-journald.socket
× systemd-journald.socket - Journal Socket
Loaded: loaded (/usr/lib/systemd/system/systemd-journald.socket; static)
Active: failed (Result: resources) since Tue 2025-12-16 11:27:40 CET; 14min ago
Duration: 19ms
Triggers: ● systemd-journald.service
Docs: man:systemd-journald.service(8)
man:journald.conf(5)
Listen: /run/systemd/journal/socket (Datagram)
/run/systemd/journal/stdout (Stream)
# journalctl -b -o short-precise | grep -B 20 "Got POLLHUP"
Dec 16 11:27:39.883137 vm-j9 systemd[1]: Reached target Switch Root.
Dec 16 11:27:39.888375 vm-j9 systemd[1]: Starting Switch Root...
Dec 16 11:27:39.895037 vm-j9 systemd[1]: Switching root.
Dec 16 11:27:39.916206 vm-j9 systemd-journald[258]: Journal stopped
Dec 16 11:27:40.852036 vm-j9 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Dec 16 11:27:40.852186 vm-j9 kernel: audit: type=1404 audit(1765880859.936:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 16 11:27:40.852280 vm-j9 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 11:27:40.852377 vm-j9 kernel: SELinux: policy capability open_perms=1
Dec 16 11:27:40.852502 vm-j9 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 11:27:40.852601 vm-j9 kernel: SELinux: policy capability always_check_network=0
Dec 16 11:27:40.852699 vm-j9 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 11:27:40.852773 vm-j9 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 11:27:40.852855 vm-j9 kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 16 11:27:40.852942 vm-j9 kernel: audit: type=1403 audit(1765880859.982:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 11:27:40.853048 vm-j9 systemd[1]: Successfully loaded SELinux policy in 48.088ms.
Dec 16 11:27:40.853194 vm-j9 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.209ms.
Dec 16 11:27:40.853295 vm-j9 systemd[1]: systemd 252-55.el9_7.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 16 11:27:40.853434 vm-j9 systemd[1]: Detected virtualization kvm.
Dec 16 11:27:40.853542 vm-j9 systemd[1]: Detected architecture x86-64.
Dec 16 11:27:40.853638 vm-j9 systemd[1]: Queued start job for default target Multi-User System.
Dec 16 11:27:40.853755 vm-j9 systemd[1]: systemd-journald.socket: Got POLLHUP on a listening socket. The service probably invoked shutdown() on it, and should better not do that.
I straced systemd-journald in the initramfs and can confirm it does NOT call shutdown() on the listening socket that was passed to it. The listening socket in question is /run/systemd/journal/stdout.
So it seems the kernel somehow decides to report EPOLLHUP on its own, perhaps because there is a window during switch root in which nobody is watching the listening socket (since neither systemd-journald nor systemd is fully operational at that point).
I strongly suspect the serialization/deserialization that happens during switch root is involved.
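As far as I can tell, the kernel reports (E)POLLHUP on a listening AF_UNIX socket only once shutdown() has been called on that socket itself; a client connecting and disconnecting is not enough. The following minimal standalone test program (not systemd code; the socket path is just an arbitrary example) can be used to double-check that: after a client connects and closes, epoll_wait() reports only EPOLLIN, while after shutdown() on the listening socket it reports EPOLLIN|EPOLLHUP, the same combination that shows up in the strace below.

/* epollhup-listen.c: a small experiment (not part of systemd) checking when
 * the kernel reports EPOLLHUP on a *listening* AF_UNIX stream socket.
 * The socket path below is arbitrary. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/epoll.h>

static void report(int ep, const char *when)
{
    struct epoll_event ev;
    int n = epoll_wait(ep, &ev, 1, 100);   /* 100 ms timeout */

    printf("%s: n=%d %s%s%s\n", when, n,
           (n > 0 && (ev.events & EPOLLIN))  ? "EPOLLIN "  : "",
           (n > 0 && (ev.events & EPOLLHUP)) ? "EPOLLHUP " : "",
           n > 0 ? "" : "(no event)");
}

int main(void)
{
    const char *path = "/tmp/hup-test.sock";
    unlink(path);

    int lfd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 || listen(lfd, 8) < 0) {
        perror("bind/listen");
        return 1;
    }

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

    /* 1. A client connects and immediately closes: the listening socket
     *    only reports EPOLLIN (pending connection), no EPOLLHUP. */
    int cfd = socket(AF_UNIX, SOCK_STREAM, 0);
    connect(cfd, (struct sockaddr *)&sa, sizeof(sa));
    close(cfd);
    report(ep, "after client connect+close");

    /* 2. shutdown() on the listening socket itself: epoll_wait() now
     *    reports EPOLLIN|EPOLLHUP, i.e. the condition behind
     *    "Got POLLHUP on a listening socket". */
    shutdown(lfd, SHUT_RDWR);
    report(ep, "after shutdown() on the listening socket");

    unlink(path);
    return 0;
}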
What is certain is that, by attaching strace to systemd from a generator, we can see systemd start polling the socket and get EPOLLIN|EPOLLHUP on it almost immediately afterwards. It does not handle the socket right away because it is busy reaping systemd-journald, which was running in the initramfs and has just stopped:
1 11:27:40.356984 epoll_ctl(4<anon_inode:[eventpoll]>, EPOLL_CTL_ADD, 51<UNIX-STREAM:[15579,"/run/systemd/journal/stdout"]>, {events=EPOLLIN, data={u32=204695968, u64=94468010371488}}) = 0 <0.000005>
:
1 11:27:40.373893 epoll_wait(4<anon_inode:[eventpoll]>, [{events=EPOLLIN, data={u32=203293792, u64=94468008969312}}, {events=EPOLLIN|EPOLLHUP, data={u32=204695968, u64=94468010371488}}], 36, 0) = 2 <0.000009>
1 11:27:40.373923 waitid(P_ALL, 0, {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=258, si_uid=0, si_status=0, si_utime=0, si_stime=0}, WNOHANG|WEXITED|WNOWAIT, NULL) = 0 <0.000008>
1 11:27:40.373946 openat(AT_FDCWD</>, "/proc/258/comm", O_RDONLY|O_CLOEXEC) = 12</proc/258/comm> <0.000011>
1 11:27:40.374174 fstat(12</proc/258/comm>, ...) = 0 <0.000005>
1 11:27:40.374203 read(12</proc/258/comm>, "systemd-journal\n", 1024) = 16 <0.000008>
:
1 11:27:40.375070 epoll_wait(4<anon_inode:[eventpoll]>, [{events=EPOLLIN|EPOLLHUP, data={u32=204695968, u64=94468010371488}}], 36, 0) = 1 <0.000005>
1 11:27:40.375090 writev(3</dev/kmsg<char 1:11>>, [{iov_base="<27>", iov_len=4}, {iov_base="systemd", iov_len=7}, {iov_base="[1]: ", iov_len=5}, {iov_base="systemd-journald.socket: Got POLLHUP on a listening socket. The service probably invoked shutdown() on it, and should better not"..., iov_len=137}, {iov_base="\n", iov_len=1}], 5) = 154 <0.000742>
Generator used to collect the strace:
# cat /usr/lib/systemd/system-generators/01-strace
#!/bin/sh
nohup strace -fttTvyy -s 128 -o /run/systemd.strace -p 1 &
What is the impact of this issue to you?
The overall impact is unknown so far, apart from losing logs:
# journalctl -b | grep "Failed to connect stdout"
Dec 16 11:27:40 vm-j9 systemd[639]: dev-mapper-rhel\x2dswap.swap: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[640]: dev-hugepages.mount: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[641]: dev-mqueue.mount: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[642]: sys-kernel-debug.mount: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[643]: sys-kernel-tracing.mount: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[644]: kmod-static-nodes.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[645]: lvm2-monitor.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[646]: modprobe@configfs.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[647]: modprobe@drm.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[648]: modprobe@efi_pstore.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[649]: modprobe@fuse.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[650]: nis-domainname.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[652]: systemd-network-generator.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[653]: systemd-remount-fs.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[654]: systemd-sysctl.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Dec 16 11:27:40 vm-j9 systemd[656]: systemd-udev-trigger.service: Failed to connect stdout to the journal socket, ignoring: Connection refused
Please provide the package NVR for which the bug is seen:
systemd-252-55.el9_7.7
I couldn't reproduce this on RHEL 10.1, but the issue is potentially present there as well, depending on the size of the race window.
How reproducible is this bug?
Very often
Steps to reproduce
- Add a dummy parameter to the kernel command line:
# grubby --args="--howdareyou" --update-kernel=ALL
- Reboot
Expected results
No such message and no socket failure:
Dec 16 11:27:40.853755 vm-j9 systemd[1]: systemd-journald.socket: Got POLLHUP on a listening socket. The service probably invoked shutdown() on it, and should better not do that.