PLAY [Run test tests_lvm_auto_size_cap.yml for nvme] ***************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Set disk interface for test] *********************************************
ok: [localhost]

PLAY [Test lvm auto size] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Run the role] ************************************************************

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/set_vars.yml for localhost

TASK [rhel-system-roles.storage : Ensure ansible_facts used by role] ***********
skipping: [localhost]

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
skipping: [localhost] => (item=RedHat.yml)
skipping: [localhost] => (item=RedHat.yml)
ok: [localhost] => (item=RedHat_8.yml)
skipping: [localhost] => (item=RedHat_8.10.yml)

TASK [rhel-system-roles.storage : Check if system is ostree] *******************
ok: [localhost]

TASK [rhel-system-roles.storage : Set flag to indicate system is ostree] *******
ok: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of pools to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of volumes to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Include the appropriate provider tasks] ******
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml for localhost

TASK [rhel-system-roles.storage : Make sure blivet is available] ***************
ok: [localhost]

TASK [rhel-system-roles.storage : Show storage_pools] **************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Show storage_volumes] ************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Get required packages] ***********************
ok: [localhost]

TASK [rhel-system-roles.storage : Enable copr repositories if needed] **********
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/enable_coprs.yml for localhost

TASK [rhel-system-roles.storage : Check if the COPR support packages should be installed] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Make sure COPR support packages are present] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Enable COPRs] ********************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Make sure required packages are installed] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Get service facts] ***************************
ok: [localhost]

TASK [rhel-system-roles.storage : Set storage_cryptsetup_services] *************
ok: [localhost]

TASK [rhel-system-roles.storage : Mask the systemd cryptsetup services] ********
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [rhel-system-roles.storage : Manage the pools and volumes to match the specified state] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Workaround for udev issue on some platforms] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [rhel-system-roles.storage : Show blivet_output] **************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set the list of pools for test verification] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Set the list of volumes for test verification] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Remove obsolete mounts] **********************
skipping: [localhost]

TASK [rhel-system-roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Set up new/current mounts] *******************
skipping: [localhost]

TASK [rhel-system-roles.storage : Manage mount ownership/permissions] **********
skipping: [localhost]

TASK [rhel-system-roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Retrieve facts for the /etc/crypttab file] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Manage /etc/crypttab to account for changes we just made] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Update facts] ********************************
ok: [localhost]

TASK [Mark tasks to be skipped] ************************************************
ok: [localhost]

TASK [Get unused disks] ********************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/get_unused_disk.yml for localhost

TASK [Ensure test packages] ****************************************************
ok: [localhost]

TASK [Find unused disks in the system] *****************************************
ok: [localhost]

TASK [Set unused_disks if necessary] *******************************************
ok: [localhost]

TASK [Exit playbook when there's not enough unused disks in the system] ********
skipping: [localhost]

TASK [Print unused disks] ******************************************************
ok: [localhost] => {
    "unused_disks": [
        "nvme0n1"
    ]
}

TASK [Run lsblk -b -l --noheadings -o NAME,SIZE] *******************************
ok: [localhost]

TASK [Set test_disk_size] ******************************************************
ok: [localhost]

TASK [Ensure bc is installed] **************************************************
changed: [localhost]

TASK [Run bc 2 * 1600321314816] ************************************************
ok: [localhost]

TASK [Test handling of too-large LVM volume size] ******************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-role-failed.yml for localhost

TASK [Store global variable value copy] ****************************************
ok: [localhost]

TASK [Verify role raises correct error] ****************************************

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/set_vars.yml for localhost

TASK [rhel-system-roles.storage : Ensure ansible_facts used by role] ***********
skipping: [localhost]

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
skipping: [localhost] => (item=RedHat.yml)
skipping: [localhost] => (item=RedHat.yml)
ok: [localhost] => (item=RedHat_8.yml)
skipping: [localhost] => (item=RedHat_8.10.yml)

TASK [rhel-system-roles.storage : Check if system is ostree] *******************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set flag to indicate system is ostree] *******
skipping: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of pools to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of volumes to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Include the appropriate provider tasks] ******
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml for localhost

TASK [rhel-system-roles.storage : Make sure blivet is available] ***************
skipping: [localhost]

TASK [rhel-system-roles.storage : Show storage_pools] **************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Show storage_volumes] ************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Get required packages] ***********************
skipping: [localhost]

TASK [rhel-system-roles.storage : Enable copr repositories if needed] **********
skipping: [localhost]

TASK [rhel-system-roles.storage : Make sure required packages are installed] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Get service facts] ***************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set storage_cryptsetup_services] *************
ok: [localhost]

TASK [rhel-system-roles.storage : Mask the systemd cryptsetup services] ********
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [rhel-system-roles.storage : Manage the pools and volumes to match the specified state] ***
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "specified size for volume 'test1' '2.91 TiB' exceeds available space in pool 'foo' (1.46 TiB)", "packages": [], "pools": [], "volumes": []}

TASK [rhel-system-roles.storage : Failed message] ******************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": {"actions": [], "changed": false, "crypts": [], "failed": true, "invocation": {"module_args": {"disklabel_type": null, "diskvolume_mkfs_option_map": {}, "packages_only": false, "pool_defaults": {"disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": []}, "pools": [{"disks": ["nvme0n1"], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "name": "foo", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [{"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "3200642629632", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}]}], "safe_mode": true, "use_partitions": null, "volume_defaults": {"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": 0, "state": "present", "thin": null, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}, "volumes": []}}, "leaves": [], "mounts": [], "msg": "specified size for volume 'test1' '2.91 TiB' exceeds available space in pool 'foo' (1.46 TiB)", "packages": [], "pools": [], "volumes": []}}

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [Check that we failed in the role] ****************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the blivet output and error message are correct] ******************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify correct exception or error message] *******************************
skipping: [localhost]

TASK [Create a pool containing one volume the same size as the backing disk] ***

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/set_vars.yml for localhost

TASK [rhel-system-roles.storage : Ensure ansible_facts used by role] ***********
skipping: [localhost]

TASK [rhel-system-roles.storage : Set platform/version specific variables] *****
skipping: [localhost] => (item=RedHat.yml)
skipping: [localhost] => (item=RedHat.yml)
ok: [localhost] => (item=RedHat_8.yml)
skipping: [localhost] => (item=RedHat_8.10.yml)

TASK [rhel-system-roles.storage : Check if system is ostree] *******************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set flag to indicate system is ostree] *******
skipping: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of pools to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Define an empty list of volumes to be used in testing] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Include the appropriate provider tasks] ******
included: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml for localhost

TASK [rhel-system-roles.storage : Make sure blivet is available] ***************
skipping: [localhost]

TASK [rhel-system-roles.storage : Show storage_pools] **************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Show storage_volumes] ************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Get required packages] ***********************
skipping: [localhost]

TASK [rhel-system-roles.storage : Enable copr repositories if needed] **********
skipping: [localhost]

TASK [rhel-system-roles.storage : Make sure required packages are installed] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Get service facts] ***************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set storage_cryptsetup_services] *************
ok: [localhost]

TASK [rhel-system-roles.storage : Mask the systemd cryptsetup services] ********
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [rhel-system-roles.storage : Manage the pools and volumes to match the specified state] ***
changed: [localhost]

TASK [rhel-system-roles.storage : Workaround for udev issue on some platforms] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
changed: [localhost] => (item=systemd-cryptsetup@luks\x2d77524a59\x2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)
changed: [localhost] => (item=systemd-cryptsetup@luks…2de491\x2d400b\x2da376\x2d43ac5ddda7f0.service)

TASK [rhel-system-roles.storage : Show blivet_output] **************************
skipping: [localhost]

TASK [rhel-system-roles.storage : Set the list of pools for test verification] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Set the list of volumes for test verification] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Remove obsolete mounts] **********************
skipping: [localhost]

TASK [rhel-system-roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Set up new/current mounts] *******************
skipping: [localhost]

TASK [rhel-system-roles.storage : Manage mount ownership/permissions] **********
skipping: [localhost]

TASK [rhel-system-roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Retrieve facts for the /etc/crypttab file] ***
ok: [localhost]

TASK [rhel-system-roles.storage : Manage /etc/crypttab to account for changes we just made] ***
skipping: [localhost]

TASK [rhel-system-roles.storage : Update facts] ********************************
ok: [localhost]

TASK [Verify role results] *****************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-role-results.yml for localhost

TASK [Print out pool information] **********************************************
ok: [localhost] => {
    "_storage_pools_list": [
        {
            "disks": [
                "nvme0n1"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_password": null,
            "name": "foo",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "shared": false,
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "_device": "/dev/mapper/foo-test1",
                    "_kernel_device": "/dev/dm-3",
                    "_mount_id": "/dev/mapper/foo-test1",
                    "_raw_device": "/dev/mapper/foo-test1",
                    "_raw_kernel_device": "/dev/dm-3",
                    "cache_devices": [],
                    "cache_mode": null,
                    "cache_size": 0,
                    "cached": false,
                    "compression": null,
                    "deduplication": null,
                    "disks": [],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_group": null,
                    "mount_mode": null,
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "mount_user": null,
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_disks": [],
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "raid_stripe_size": null,
                    "size": "1600321314816",
                    "state": "present",
                    "thin": false,
                    "thin_pool_name": null,
                    "thin_pool_size": null,
                    "type": "lvm",
                    "vdo_pool_size": null
                }
            ]
        }
    ]
}

TASK [Print out volume information] ********************************************
skipping: [localhost]

TASK [Collect info about the volumes.] *****************************************
ok: [localhost]

TASK [Read the /etc/fstab file for volume existence] ***************************
ok: [localhost]

TASK [Read the /etc/crypttab file] *********************************************
ok: [localhost]

TASK [Verify the volumes listed in storage_pools were correctly managed] *******
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-pool.yml for localhost => (item={'disks': ['nvme0n1'], 'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'name': 'foo', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': False, 'state': 'present', 'type': 'lvm', 'volumes': [{'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': '', 'fs_label': '', 'fs_type': 'xfs', 'mount_options': 'defaults', 'mount_point': '', 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'name': 'test1', 'raid_level': None, 'size': '1600321314816', 'state': 'present', 'type': 'lvm', 'cached': False, 'cache_devices': [], 'cache_mode': None, 'cache_size': 0, 'compression': None, 'deduplication': None, 'raid_disks': [], 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'thin': False, 'vdo_pool_size': None, 'disks': [], 'fs_overwrite_existing': True, 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, '_device': '/dev/mapper/foo-test1', '_raw_device': '/dev/mapper/foo-test1', '_mount_id': '/dev/mapper/foo-test1', '_kernel_device': '/dev/dm-3', '_raw_kernel_device': '/dev/dm-3'}]})

TASK [Set _storage_pool_tests] *************************************************
ok: [localhost]

TASK [Get VG shared value status] **********************************************
ok: [localhost]

TASK [Verify that VG shared value checks out] **********************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify pool subset] ******************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-pool-members.yml for localhost => (item=members)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-pool-volumes.yml for localhost => (item=volumes)

TASK [Set test variables] ******************************************************
ok: [localhost]

TASK [Get the canonical device path for each member device] ********************
ok: [localhost] => (item=/dev/nvme0n1)

TASK [Set pvs lvm length] ******************************************************
ok: [localhost]

TASK [Set pool pvs] ************************************************************
ok: [localhost]

TASK [Verify PV count] *********************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Set expected pv type] ****************************************************
ok: [localhost]

TASK [Set expected pv type] ****************************************************
ok: [localhost]

TASK [Set expected pv type] ****************************************************
skipping: [localhost]

TASK [Check the type of each PV] ***********************************************
ok: [localhost] => (item=/dev/nvme0n1) => {
    "ansible_loop_var": "pv",
    "changed": false,
    "msg": "All assertions passed",
    "pv": "/dev/nvme0n1"
}

TASK [Check MD RAID] ***********************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-md.yml for localhost

TASK [Get information about RAID] **********************************************
skipping: [localhost]

TASK [Set active devices regex] ************************************************
skipping: [localhost]

TASK [Set spare devices regex] *************************************************
skipping: [localhost]

TASK [Set md version regex] ****************************************************
skipping: [localhost]

TASK [Set md chunk size regex] *************************************************
skipping: [localhost]

TASK [Parse the chunk size] ****************************************************
skipping: [localhost]

TASK [Check RAID active devices count] *****************************************
skipping: [localhost]

TASK [Check RAID spare devices count] ******************************************
skipping: [localhost]

TASK [Check RAID metadata version] *********************************************
skipping: [localhost]

TASK [Check RAID chunk size] ***************************************************
skipping: [localhost]

TASK [Reset variables used by tests] *******************************************
ok: [localhost]

TASK [Check LVM RAID] **********************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-members-lvmraid.yml for localhost

TASK [Validate pool member LVM RAID settings] **********************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-member-lvmraid.yml for localhost => (item={'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': '', 'fs_label': '', 'fs_type': 'xfs', 'mount_options': 'defaults', 'mount_point': '', 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'name': 'test1', 'raid_level': None, 'size': '1600321314816', 'state': 'present', 'type': 'lvm', 'cached': False, 'cache_devices': [], 'cache_mode': None, 'cache_size': 0, 'compression': None, 'deduplication': None, 'raid_disks': [], 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'thin': False, 'vdo_pool_size': None, 'disks': [], 'fs_overwrite_existing': True, 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, '_device': '/dev/mapper/foo-test1', '_raw_device': '/dev/mapper/foo-test1', '_mount_id': '/dev/mapper/foo-test1', '_kernel_device': '/dev/dm-3', '_raw_kernel_device': '/dev/dm-3'})

TASK [Get information about the LV] ********************************************
skipping: [localhost]

TASK [Set LV segment type] *****************************************************
skipping: [localhost]

TASK [Check segment type] ******************************************************
skipping: [localhost]

TASK [Set LV stripe size] ******************************************************
skipping: [localhost]

TASK [Parse the requested stripe size] *****************************************
skipping: [localhost]

TASK [Set expected stripe size] ************************************************
skipping: [localhost]

TASK [Check stripe size] *******************************************************
skipping: [localhost]

TASK [Check Thin Pools] ********************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-members-thin.yml for localhost

TASK [Validate pool member thinpool settings] **********************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-member-thin.yml for localhost => (item={'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': '', 'fs_label': '', 'fs_type': 'xfs', 'mount_options': 'defaults', 'mount_point': '', 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'name': 'test1', 'raid_level': None, 'size': '1600321314816', 'state': 'present', 'type': 'lvm', 'cached': False, 'cache_devices': [], 'cache_mode': None, 'cache_size': 0, 'compression': None, 'deduplication': None, 'raid_disks': [], 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'thin': False, 'vdo_pool_size': None, 'disks': [], 'fs_overwrite_existing': True, 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, '_device': '/dev/mapper/foo-test1', '_raw_device': '/dev/mapper/foo-test1', '_mount_id': '/dev/mapper/foo-test1', '_kernel_device': '/dev/dm-3', '_raw_kernel_device': '/dev/dm-3'})

TASK [Get information about thinpool] ******************************************
skipping: [localhost]

TASK [Check that volume is in correct thinpool (when thinp name is provided)] ***
skipping: [localhost]

TASK [Check that volume is in thinpool (when thinp name is not provided)] ******
skipping: [localhost]

TASK [Reset variable used by test] *********************************************
ok: [localhost]

TASK [Check member encryption] *************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-members-encryption.yml for localhost

TASK [Set test variables] ******************************************************
ok: [localhost]

TASK [Validate pool member LUKS settings] **************************************
skipping: [localhost] => (item=/dev/nvme0n1)
skipping: [localhost]

TASK [Validate pool member crypttab entries] ***********************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-member-crypttab.yml for localhost => (item=/dev/nvme0n1)

TASK [Set variables used by tests] *********************************************
ok: [localhost]

TASK [Check for /etc/crypttab entry] *******************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Validate the format of the crypttab entry] *******************************
skipping: [localhost]

TASK [Check backing device of crypttab entry] **********************************
skipping: [localhost]

TASK [Check key file of crypttab entry] ****************************************
skipping: [localhost]

TASK [Clear test variables] ****************************************************
ok: [localhost]

TASK [Clear test variables] ****************************************************
ok: [localhost]

TASK [Check VDO] ***************************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-members-vdo.yml for localhost

TASK [Validate pool member VDO settings] ***************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/verify-pool-member-vdo.yml for localhost => (item={'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': '', 'fs_label': '', 'fs_type': 'xfs', 'mount_options': 'defaults', 'mount_point': '', 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'name': 'test1', 'raid_level': None, 'size': '1600321314816', 'state': 'present', 'type': 'lvm', 'cached': False, 'cache_devices': [], 'cache_mode': None, 'cache_size': 0, 'compression': None, 'deduplication': None, 'raid_disks': [], 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'thin': False, 'vdo_pool_size': None, 'disks': [], 'fs_overwrite_existing': True, 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, '_device': '/dev/mapper/foo-test1', '_raw_device': '/dev/mapper/foo-test1', '_mount_id': '/dev/mapper/foo-test1', '_kernel_device': '/dev/dm-3', '_raw_kernel_device': '/dev/dm-3'})

TASK [Get information about VDO deduplication] *********************************
skipping: [localhost]

TASK [Check if VDO deduplication is off] ***************************************
skipping: [localhost]

TASK [Check if VDO deduplication is on] ****************************************
skipping: [localhost]

TASK [Get information about VDO compression] ***********************************
skipping: [localhost]

TASK [Check if VDO deduplication is off] ***************************************
skipping: [localhost]

TASK [Check if VDO deduplication is on] ****************************************
skipping: [localhost]

TASK [Clear test variables] ****************************************************
ok: [localhost]

TASK [Clean up test variables] *************************************************
ok: [localhost]

TASK [Verify the volumes] ******************************************************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume.yml for localhost => (item={'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': '', 'fs_label': '', 'fs_type': 'xfs', 'mount_options': 'defaults', 'mount_point': '', 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'name': 'test1', 'raid_level': None, 'size': '1600321314816', 'state': 'present', 'type': 'lvm', 'cached': False, 'cache_devices': [], 'cache_mode': None, 'cache_size': 0, 'compression': None, 'deduplication': None, 'raid_disks': [], 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'thin': False, 'vdo_pool_size': None, 'disks': [], 'fs_overwrite_existing': True, 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, '_device': '/dev/mapper/foo-test1', '_raw_device': '/dev/mapper/foo-test1', '_mount_id': '/dev/mapper/foo-test1', '_kernel_device': '/dev/dm-3', '_raw_kernel_device': '/dev/dm-3'})

TASK [Set storage volume test variables] ***************************************
ok: [localhost]

TASK [Run test verify for {{ storage_test_volume_subset }}] ********************
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-mount.yml for localhost => (item=mount)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-fstab.yml for localhost => (item=fstab)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-fs.yml for localhost => (item=fs)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-device.yml for localhost => (item=device)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-encryption.yml for localhost => (item=encryption)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-md.yml for localhost => (item=md)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-size.yml for localhost => (item=size)
included: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-volume-cache.yml for localhost => (item=cache)

TASK [Get expected mount device based on device type] **************************
ok: [localhost]

TASK [Set some facts] **********************************************************
ok: [localhost]

TASK [Get information about the mountpoint directory] **************************
skipping: [localhost]

TASK [Verify the current mount state by device] ********************************
skipping: [localhost]

TASK [Verify the current mount state by mount point] ***************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify mount directory user] *********************************************
skipping: [localhost]

TASK [Verify mount directory
group] ******************************************** skipping: [localhost] TASK [Verify mount directory permissions] ************************************** skipping: [localhost] TASK [Verify the mount fs type] ************************************************ skipping: [localhost] TASK [Get path of test volume device] ****************************************** skipping: [localhost] TASK [Gather swap info] ******************************************************** skipping: [localhost] TASK [Verify swap status] ****************************************************** skipping: [localhost] TASK [Unset facts] ************************************************************* ok: [localhost] TASK [Set some variables for fstab checking] *********************************** ok: [localhost] TASK [Verify that the device identifier appears in /etc/fstab] ***************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Verify the fstab mount point] ******************************************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Verify mount_options] **************************************************** skipping: [localhost] TASK [Clean up variables] ****************************************************** ok: [localhost] TASK [Verify fs type] ********************************************************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Verify fs label] ********************************************************* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [See whether the device node is present] ********************************** ok: [localhost] TASK [Verify the presence/absence of the device node] ************************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Verify the presence/absence of the device node] ************************** skipping: [localhost] TASK [Make sure we got info about this volume] 
********************************* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Process volume type (set initial value) (1/2)] *************************** ok: [localhost] TASK [Process volume type (get RAID value) (2/2)] ****************************** skipping: [localhost] TASK [Verify the volume's device type] ***************************************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Stat the LUKS device, if encrypted] ************************************** skipping: [localhost] TASK [Ensure cryptsetup is present] ******************************************** ok: [localhost] TASK [Collect LUKS info for this volume] *************************************** skipping: [localhost] TASK [Verify the presence/absence of the LUKS device node] ********************* skipping: [localhost] TASK [Verify that the raw device is the same as the device if not encrypted] *** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Make sure we got info about the LUKS volume if encrypted] **************** skipping: [localhost] TASK [Verify the LUKS volume's device type if encrypted] *********************** skipping: [localhost] TASK [Check LUKS version] ****************************************************** skipping: [localhost] TASK [Check LUKS key size] ***************************************************** skipping: [localhost] TASK [Check LUKS cipher] ******************************************************* skipping: [localhost] TASK [Set test variables] ****************************************************** ok: [localhost] TASK [Check for /etc/crypttab entry] ******************************************* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Validate the format of the crypttab entry] ******************************* skipping: [localhost] TASK [Check backing device of crypttab entry] ********************************** skipping: [localhost] TASK 
[Check key file of crypttab entry] **************************************** skipping: [localhost] TASK [Clear test variables] **************************************************** ok: [localhost] TASK [Get information about RAID] ********************************************** skipping: [localhost] TASK [Set active devices regex] ************************************************ skipping: [localhost] TASK [Set spare devices regex] ************************************************* skipping: [localhost] TASK [Set md version regex] **************************************************** skipping: [localhost] TASK [Set chunk size regex] **************************************************** skipping: [localhost] TASK [Parse the chunk size] **************************************************** skipping: [localhost] TASK [Check RAID active devices count] ***************************************** skipping: [localhost] TASK [Check RAID spare devices count] ****************************************** skipping: [localhost] TASK [Check RAID metadata version] ********************************************* skipping: [localhost] TASK [Check RAID chunk size] *************************************************** skipping: [localhost] TASK [Parse the actual size of the volume] ************************************* ok: [localhost] TASK [Parse the requested size of the volume] ********************************** ok: [localhost] TASK [Establish base value for expected size] ********************************** ok: [localhost] TASK [Show expected size] ****************************************************** ok: [localhost] => { "storage_test_expected_size": "1600321314816" } TASK [Get the size of parent/pool device] ************************************** ok: [localhost] TASK [Show test pool] ********************************************************** skipping: [localhost] TASK [Show test blockinfo] ***************************************************** skipping: [localhost] TASK [Show test pool size] 
***************************************************** skipping: [localhost] TASK [Calculate the expected size based on pool size and percentage value] ***** skipping: [localhost] TASK [Default thin pool reserved space values] ********************************* skipping: [localhost] TASK [Default minimal thin pool reserved space size] *************************** skipping: [localhost] TASK [Default maximal thin pool reserved space size] *************************** skipping: [localhost] TASK [Calculate maximum usable space in thin pool] ***************************** skipping: [localhost] TASK [Apply upper size limit to max usable thin pool space] ******************** skipping: [localhost] TASK [Apply lower size limit to max usable thin pool space] ******************** skipping: [localhost] TASK [Convert maximum usable thin pool space from int to Size] ***************** skipping: [localhost] TASK [Show max thin pool size] ************************************************* skipping: [localhost] TASK [Show volume thin pool size] ********************************************** skipping: [localhost] TASK [Show test volume size] *************************************************** skipping: [localhost] TASK [Establish base value for expected thin pool size] ************************ skipping: [localhost] TASK [Calculate the expected size based on pool size and percentage value] ***** skipping: [localhost] TASK [Establish base value for expected thin pool volume size] ***************** skipping: [localhost] TASK [Calculate the expected thin pool volume size based on percentage value] *** skipping: [localhost] TASK [Replace expected volume size with calculated value] ********************** skipping: [localhost] TASK [Show actual size] ******************************************************** ok: [localhost] => { "storage_test_actual_size": { "bytes": 1649267441664, "changed": false, "failed": false, "lvm": "1t", "parted": "1TiB", "size": "1 TiB" } } TASK [Show expected size] 
****************************************************** ok: [localhost] => { "storage_test_expected_size": "1600321314816" } TASK [Assert expected size is actual size] ************************************* fatal: [localhost]: FAILED! => { "assertion": "(storage_test_expected_size | int - storage_test_actual_size.bytes) | abs / storage_test_expected_size | int < 0.02", "changed": false, "evaluated_to": false, "msg": "Volume test1 has unexpected size (expected: 1600321314816 / actual: 1649267441664)" } PLAY RECAP ********************************************************************* localhost : ok=132 changed=8 unreachable=0 failed=1 skipped=128 rescued=2 ignored=0
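For context on the failure: the final assertion applies a 2% relative-size tolerance between the requested volume size and the size the role actually created. A minimal sketch reproducing that arithmetic with the byte values from the log (variable names here are ours, not the role's):

```python
# Values taken from the log output above.
expected = 1600321314816   # storage_test_expected_size (bytes)
actual = 1649267441664     # storage_test_actual_size.bytes

# Relative error, mirroring the role's assertion:
# |expected - actual| / expected < 0.02
relative_error = abs(expected - actual) / expected
print(f"relative error: {relative_error:.4f}")  # 0.0306

# Just over the 2% tolerance, so the assert task fails.
print("within tolerance" if relative_error < 0.02 else "unexpected size")
```

The actual size (1649267441664 bytes) exceeds the requested size by roughly 3%, which is why the assertion evaluates to false and the play ends with `failed=1`.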