RHEL-21537

[RHEL-8.10] WARNING: lvmlockd process is not running. Global lock failed: check that lvmlockd is running

    • rhel-system-roles-1.23.0-2.6.el8
    • sst_system_roles
    • QE ack, Dev ack
    • Red Hat Enterprise Linux
    • Release Note Not Required

      What were you trying to do that didn't work?

      Run the storage role test cases in sequence; a regular (non-shared) case that runs after the LVM shared (lvmlockd) case fails with the error below.
      Please provide the package NVR for which bug is seen:

      rhel-system-roles-1.23.0-2.6.el8

      How reproducible:

      Steps to reproduce

      1. After the LVM shared (lvmlockd) test case finishes, the LVM configuration is not restored, so the next test case fails with the error below.
      2. The LVM shared test updates /etc/lvm/lvm.conf with:

      global {
          use_lvmlockd = 1
      }

      The lvm.conf should be restored to its default after the LVM shared test, for example:

      global {
          use_lvmlockd = 0
      }

      or the use_lvmlockd setting should be removed from the global section.
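
      A possible cleanup step (a sketch only; these tasks and the lvmlockd service name are assumptions, not something the storage role or the test currently provides) could reset the option after the shared-LVM test:

      # Assumed cleanup tasks: reset use_lvmlockd so later cases use local locking again.
      - name: Restore default LVM locking configuration
        ansible.builtin.lineinfile:
          path: /etc/lvm/lvm.conf
          regexp: '^\s*use_lvmlockd\s*='
          line: '        use_lvmlockd = 0'
        become: true

      - name: Stop lvmlockd between tests (service name assumed to be lvmlockd)
        ansible.builtin.service:
          name: lvmlockd
          state: stopped
        become: true
        ignore_errors: true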

      Expected results

      The later (non-shared) test cases run against the default LVM configuration and the storage role configures the foo/test1 volume successfully.
      Actual results

       

      task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:106
      fatal: [localhost]: FAILED! => {"changed": false, "msg": {"actions": [], "changed": false, "crypts": [], "failed": true, "invocation": {"module_args": {"disklabel_type": null, "diskvolume_mkfs_option_map": {}, "packages_only": false, "pool_defaults": {"disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": []}, "pools": [{"disks": ["sda"], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "name": "foo", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [{"_device": "/dev/mapper/foo-test1", "_mount_id": "/dev/mapper/foo-test1", "_raw_device": "/dev/mapper/foo-test1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/opt/test1", "mount_user": null, "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "5g", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}]}], "safe_mode": false, "use_partitions": null, "volume_defaults": {"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": 0, "state": "present", "thin": null, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}, "volumes": []}}, "leaves": [], "mounts": [], "msg": "Failed to commit changes to disk: Process reported exit code 5:   WARNING: lvmlockd process is not running.\n  Global lock failed: check that lvmlockd is running.\n", "packages": ["dosfstools", "nvme-cli", "xfsprogs", "lvm2"], "pools": [], "volumes": []}}
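
      For reference, the failing run above is a plain (non-shared) storage role invocation; a minimal playbook with the same pool and volume parameters as the module_args above would look roughly like this (a sketch reconstructed from the log; the actual test playbook is the one in the linked Beaker recipe):

      # Reconstructed reproducer (assumption): run after the shared-LVM test has left
      # use_lvmlockd = 1 in /etc/lvm/lvm.conf to hit the "Global lock failed" error.
      - hosts: localhost
        vars:
          storage_pools:
            - name: foo
              disks: ["sda"]
              volumes:
                - name: test1
                  size: "5g"
                  fs_type: xfs
                  mount_point: /opt/test1
        roles:
          - rhel-system-roles.storage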
       

      https://beaker.engineering.redhat.com/recipes/15345800#task171729196

            rmeggins@redhat.com Richard Megginson
            guazhang@redhat.com Guangwu Zhang