RHEL-120390

'No input from event server' after lvmpolld failed (or was stopped?) during vdo testing


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Version: rhel-10.2
    • Component: lvm2
    • Severity: Important
    • Assigned Team: rhel-storage-lvm
    • Architecture: x86_64

      I noticed issues during a long run of vdo regression testing. I'm not exactly sure how I got into this state, but a restart of lvmpolld quickly fixed it. When LVM commands can't connect, they should provide a better warning/error than 'No input from event server', ideally one that points at how to remedy the situation.
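
      As a quick triage step when this warning appears, it may help to confirm that the polling and event daemons (and their activation sockets) are actually reachable before assuming the volumes themselves are the problem. A minimal sketch, assuming the stock lvm2/device-mapper-event unit names shipped on RHEL:

      # Check the socket-activated daemons LVM relies on for polling/monitoring.
      systemctl status lvm2-lvmpolld.socket lvm2-lvmpolld.service
      systemctl status dm-event.socket dm-event.service
      # Report which LVs LVM believes are currently monitored by dmeventd.
      lvs -a -o lv_name,vg_name,seg_monitor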

      kernel-6.12.0-126.el10    BUILT: Thu Sep  4 05:53:31 PM CEST 2025
      lvm2-2.03.35-1.el10    BUILT: Wed Sep 10 05:00:31 PM CEST 2025
      lvm2-libs-2.03.35-1.el10    BUILT: Wed Sep 10 05:00:31 PM CEST 2025
       
       
      [root@virt-497 ~]# lvcreate -m 1 -L 100M -n my_mirror vdo_sanity
        No input from event server.
        WARNING: Failed to monitor vdo_sanity/my_mirror.
        Logical volume "my_mirror" created.
       
      [root@virt-497 ~]# lvcreate --yes --type vdo -n vdo_lv  -L 25G vdo_sanity -V 25G  
          The VDO volume can address 22.00 GB in 11 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        No input from event server.
        WARNING: Failed to monitor vdo_sanity/vpool0.
        Logical volume "vdo_lv" created.
       
      [root@virt-497 ~]# systemctl status lvm2-lvmpolld.service
      ○ lvm2-lvmpolld.service - LVM2 poll daemon
           Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmpolld.service; static)
           Active: inactive (dead) since Thu 2025-10-09 19:57:33 CEST; 1h 49min ago
         Duration: 1min 12.110s
       Invocation: d960b086bd5348be93432849ebbd575c
      TriggeredBy: ● lvm2-lvmpolld.socket
             Docs: man:lvmpolld(8)
          Process: 2202124 ExecStart=/usr/sbin/lvmpolld -t 60 -f (code=exited, status=0/SUCCESS)
         Main PID: 2202124 (code=exited, status=0/SUCCESS)
         Mem peak: 27.4M
              CPU: 100ms
       
      Oct 09 19:56:21 virt-497.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Started lvm2-lvmpolld.service - LVM2 poll daemon.
      Oct 09 19:56:27 virt-497.cluster-qe.lab.eng.brq.redhat.com lvmpolld[2202124]: W:         LVPOLL: PID 2202127: STDERR: '  No input from event server.'
      Oct 09 19:56:33 virt-497.cluster-qe.lab.eng.brq.redhat.com lvmpolld[2202124]: W:         LVPOLL: PID 2202127: STDERR: '  WARNING: Failed to unmonitor vdo_sanity/vpool0.'
      Oct 09 19:56:33 virt-497.cluster-qe.lab.eng.brq.redhat.com lvmpolld[2202124]: W:         LVPOLL: PID 2202127: STDERR: '  No input from event server.'
      Oct 09 19:56:33 virt-497.cluster-qe.lab.eng.brq.redhat.com lvmpolld[2202124]: W:         LVPOLL: PID 2202127: STDERR: '  WARNING: Failed to monitor vdo_sanity/vpool0.'
      Oct 09 19:56:33 virt-497.cluster-qe.lab.eng.brq.redhat.com lvmpolld[2202124]: W:         LVPOLL: PID 2202127: STDERR: '  WARNING: This metadata update is NOT backed up.'
      Oct 09 19:57:33 virt-497.cluster-qe.lab.eng.brq.redhat.com systemd[1]: lvm2-lvmpolld.service: Deactivated successfully.
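
      For context, lvm2-lvmpolld.service is static and socket-activated (TriggeredBy: lvm2-lvmpolld.socket), and the daemon exiting after being idle (it is started with -t 60) is presumably expected, so an inactive service by itself shouldn't be fatal; the next client connection should start it again. A sketch of a follow-up check in this state, assuming the default socket path from lvmpolld(8):

      # Verify the activation socket is still listening while the service is idle.
      systemctl is-active lvm2-lvmpolld.socket
      systemctl list-sockets | grep lvmpolld
      # Assumed default socket path: /run/lvm/lvmpolld.socket
      ss -xl | grep lvmpolld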
       
       
      ###################################################################################################
      Here's what was going on around the time the daemon went inactive ('since Thu 2025-10-09 19:57:33 CEST'; the syslog timestamps below appear to be UTC, two hours behind CEST):
      Oct  9 17:57:26 virt-497 qarshd[2078088]: Running cmdline: lvchange --yes -an vdo_sanity/vdo_lv
      Oct  9 17:57:26 virt-497 kernel: device-mapper: vdo91:lvchange: suspending device '253:3'
      Oct  9 17:57:26 virt-497 kernel: device-mapper: vdo: dm_vdo91:journa: beginning save (vcn 5)
      Oct  9 17:57:26 virt-497 kernel: device-mapper: vdo: dm_vdo91:journa: finished save (vcn 5)
      Oct  9 17:57:26 virt-497 kernel: device-mapper: vdo91:lvchange: device '253:3' suspended
      Oct  9 17:57:26 virt-497 kernel: device-mapper: vdo91:lvchange: stopping device '253:3'
      Oct  9 17:57:27 virt-497 kernel: device-mapper: vdo91:lvchange: device '253:3' stopped
      Oct  9 17:57:27 virt-497 systemd[1]: qarshd@54784-10.37.165.145:5016-10.22.65.99:57948.service: Deactivated successfully.
      Oct  9 17:57:27 virt-497 systemd[1]: Started qarshd@54785-10.37.165.145:5016-10.22.65.99:57956.service - qarsh Per-Connection Server (10.22.65.99:57956).
      Oct  9 17:57:27 virt-497 qarshd[2078130]: Talking to peer ::ffff:10.22.65.99:57956 (IPv6)
      Oct  9 17:57:28 virt-497 qarshd[2078130]: Running cmdline: lvconvert --yes --merge vdo_sanity/merge
      Oct  9 17:57:28 virt-497 systemd[1]: qarshd@54785-10.37.165.145:5016-10.22.65.99:57956.service: Deactivated successfully.
      Oct  9 17:57:28 virt-497 systemd[1]: Started qarshd@54786-10.37.165.145:5016-10.22.65.99:57964.service - qarsh Per-Connection Server (10.22.65.99:57964).
      Oct  9 17:57:28 virt-497 qarshd[2078208]: Talking to peer ::ffff:10.22.65.99:57964 (IPv6)
      Oct  9 17:57:29 virt-497 qarshd[2078208]: Running cmdline: lvchange --yes -ay vdo_sanity/vdo_lv
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: table line: V4 /dev/dm-2 6553600 4096 32768 16380 deduplication on compression on maxDiscard 1 ack 1 bio 4 bioRotationInterval 64 cpu 2 hash 1 logical 1 physical 1
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: loading device '253:3'
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: zones: 1 logical, 1 physical, 1 hash; total threads: 12
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: starting device '253:3'
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo: dm_vdo92:physQ0: VDO commencing normal operation
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo: dm_vdo92:journa: Setting UDS index target state to online
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo: dm_vdo92:dedupe: loading or rebuilding index: 253:2
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: device '253:3' started
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: resuming device '253:3'
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo: dm_vdo92:dedupe: Using 1 indexing zone for concurrency.
      Oct  9 17:57:29 virt-497 kernel: device-mapper: vdo92:lvchange: device '253:3' resumed
      Oct  9 17:57:30 virt-497 dmeventd[1967215]: Monitoring VDO pool vdo_sanity-vpool0-vpool.
      Oct  9 17:57:30 virt-497 systemd[1]: Started lvm2-lvmpolld.service - LVM2 poll daemon.
      Oct  9 17:57:30 virt-497 kernel: device-mapper: vdo: dm_vdo92:dedupe: loaded index from chapter 0 through chapter 5
      Oct  9 17:57:30 virt-497 systemd[1]: qarshd@54786-10.37.165.145:5016-10.22.65.99:57964.service: Deactivated successfully.
      Oct  9 17:57:31 virt-497 systemd[1]: Started qarshd@54787-10.37.165.145:5016-10.22.65.99:57970.service - qarsh Per-Connection Server (10.22.65.99:57970).
      Oct  9 17:57:31 virt-497 qarshd[2078343]: Talking to peer ::ffff:10.22.65.99:57970 (IPv6)
      Oct  9 17:57:31 virt-497 NetworkManager[1104]: <info>  [1760025451.5226] dhcp4 (private1): state changed new lease, address=192.168.4.17
      Oct  9 17:57:31 virt-497 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
      Oct  9 17:57:31 virt-497 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
      Oct  9 17:57:31 virt-497 qarshd[2078343]: Running cmdline: lvs --reportformat json -o data_percent vdo_sanity/merge
      Oct  9 17:57:32 virt-497 systemd[1]: qarshd@54787-10.37.165.145:5016-10.22.65.99:57970.service: Deactivated successfully.
      Oct  9 17:57:36 virt-497 dmeventd[1967215]: No longer monitoring VDO pool vdo_sanity-vpool0-vpool.
      Oct  9 17:57:36 virt-497 dmeventd[1967215]: No longer monitoring snapshot vdo_sanity-merge.
       
      ###################################################################################################
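
      For reference, roughly the same window can be pulled straight from the journal instead of syslog; a sketch using the time bounds from the excerpt above, and assuming dmeventd runs under the usual dm-event.service unit:

      # Collect lvmpolld/dmeventd messages around the incident.
      journalctl -u lvm2-lvmpolld.service -u dm-event.service \
          --since "2025-10-09 17:56:00" --until "2025-10-09 17:58:00"
      # Kernel-side device-mapper messages from the same period:
      journalctl -k --since "2025-10-09 17:56:00" --until "2025-10-09 17:58:00" | grep device-mapper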
       
       
       
      [root@virt-497 ~]# systemctl stop lvm2-lvmpolld.socket
      [root@virt-497 ~]# systemctl start lvm2-lvmpolld.service
      [root@virt-497 ~]# systemctl status lvm2-lvmpolld.service
      ● lvm2-lvmpolld.service - LVM2 poll daemon
           Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmpolld.service; static)
           Active: active (running) since Thu 2025-10-09 21:47:50 CEST; 4s ago
       Invocation: 5254d63e5de743aa8b1fec058b39e89c
      TriggeredBy: ○ lvm2-lvmpolld.socket
             Docs: man:lvmpolld(8)
         Main PID: 2211923 (lvmpolld)
            Tasks: 1 (limit: 24947)
           Memory: 2.6M (peak: 2.7M)
              CPU: 18ms
           CGroup: /system.slice/lvm2-lvmpolld.service
                   └─2211923 /usr/sbin/lvmpolld -t 60 -f
       
      [root@virt-497 ~]# lvcreate --yes --type vdo -n vdo_lv  -L 25G vdo_sanity -V 25G  
          The VDO volume can address 22.00 GB in 11 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
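
      For anyone hitting the same state, the workaround boils down to bouncing the poll daemon as shown above; the LVs created while monitoring was failing can presumably be re-registered with dmeventd afterwards. A sketch (LV names taken from this report; the --monitor step is an assumption, not something verified here):

      # Workaround used above: restart the socket-activated poll daemon.
      systemctl stop lvm2-lvmpolld.socket
      systemctl start lvm2-lvmpolld.service
      # Assumed follow-up: ask dmeventd to monitor the LVs that failed earlier.
      lvchange --monitor y vdo_sanity/my_mirror
      lvchange --monitor y vdo_sanity/vpool0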
      

              Assignee: Marian Csontos (mcsontos@redhat.com)
              Reporter: Corey Marthaler (cmarthal@redhat.com)