RHEL-103801

A process with a large number of open file descriptors (84190) core dumps, but the systemd-coredump utility cannot handle the amount of information sent via sendmsg() to the receiving systemd-coredump instance.


    • Low
    • Customer Facing, Customer Reported
    • rhel-systemd
    • x86_64

      What were you trying to do that didn't work?

      • cannot collect application core
      Jul 14 16:56:15 testhost systemd-coredump[12714]: Failed to send coredump datagram: No buffer space available
      Jul 14 16:56:15 testhost systemd[1]: Started Process Core Dump (PID 12714/UID 0).
      Jul 14 16:56:15 testhost systemd-coredump[12715]: Coredump file descriptor missing.
      Jul 14 16:56:15 testhost systemd[1]: systemd-coredump@1-12714-0.service: Deactivated successfully.
      

      What is the impact of this issue to you?

      • core dumps fail

      Please provide the package NVR for which the bug is seen:

      • systemd-252-32.el9_4.x86_64

      How reproducible is this bug?:

      • Reproduced by the customer; not reproduced in-house

      Steps to reproduce

      1. Unknown; our reproducer attempt (below) failed to trigger the failure

      Expected results

      • core dump should not fail

      Actual results

      • core dump fails

      Reproducer attempt:

      • We attempted to reproduce this by setting the buffer sizes to be unreasonably small.
        # sysctl -w net.core.wmem_default=8192 net.core.rmem_default=4096
        net.core.wmem_default=8192
        net.core.rmem_default=4096
        
        # ./files 100000 100
        sleeping 100 seconds
        Segmentation fault (core dumped)
        
        # kill -11 $(pidof files)
        
        # journalctl | tail
        Jul 11 09:13:39 test-rhel9.local systemd[1]: Created slice Slice /system/systemd-coredump.
        Jul 11 09:13:39 test-rhel9.local systemd[1]: Started Process Core Dump (PID 1660/UID 0).
        Jul 11 09:13:39 test-rhel9.local systemd-coredump[1661]: Process 1658 (files) of user 0 dumped core.
                                                                     
                                                                     Stack trace of thread 1658:
                                                                     #0  0x00007fd22f0d461a clock_nanosleep@GLIBC_2.2.5 (libc.so.6 + 0xd461a)
                                                                     #1  0x00007fd22f0d9247 __nanosleep (libc.so.6 + 0xd9247)
                                                                     #2  0x00007fd22f0d917e sleep (libc.so.6 + 0xd917e)
                                                                     #3  0x0000000000401306 n/a (/root/coredump/files + 0x1306)
                                                                     #4  0x00007fd22f0295d0 __libc_start_call_main (libc.so.6 + 0x295d0)
                                                                     #5  0x00007fd22f029680 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x29680)
                                                                     #6  0x00000000004010d5 n/a (/root/coredump/files + 0x10d5)
                                                                     ELF object binary architecture: AMD x86-64
        Jul 11 09:13:39 test-rhel9.local systemd[1]: systemd-coredump@0-1660-0.service: Deactivated successfully.
        
        # coredumpctl list
        TIME                          PID UID GID SIG     COREFILE EXE                  SIZE
        Fri 2025-07-11 09:13:39 AEST 1658   0   0 SIGSEGV present  /root/coredump/files 1.2M
        

      msekleta@redhat.com Michal Sekletar
      rhn-support-abetkike Amey Betkiker
      systemd maint mailing list
      Frantisek Sumsal