RHEL-6984

Cannot recover a host since disk layout recreation script fails

    • Bug
    • Resolution: Done-Errata
    • Undefined
    • rhel-9.4
    • rhel-9.1.0
    • rear
    • rear-2.6-20.el9
    • None
    • Important
    • ZStream
    • rhel-sst-cs-system-management
    • ssg_core_services
    • 14
    • 23
    • 8
    • False
    • Yes
    • None
    • Approved Blocker
    • Bug Fix
      .ReaR recovery no longer fails on systems with a small thin pool metadata size

      Previously, ReaR did not save the size of the pool metadata volume when saving a layout of an LVM volume group with a thin pool. During recovery, ReaR recreated the pool with the default size even if the system used a non-default pool metadata size.

      As a consequence, when the original pool metadata size was smaller than the default size and no free space was available in the volume group, the layout recreation during system recovery failed with a message in the log similar to these examples:

      ----
      Insufficient free space: 230210 extents needed, but only 230026 available
      ----
      or
      ----
      Volume group "vg" has insufficient free space (16219 extents): 16226 required.
      ----

      With this update, the recovered system has a metadata volume with the same size as the original system. As a result, the recovery of a system with a small thin pool metadata size and no extra free space in the volume group finishes successfully.
    • Done
    • None

      Description of problem:

      Cannot recover the host from the backup ISO. The recovery attempt fails with the message "disk layout recreation script failed".

      From /var/log/rear/rear-controller-0.log

      2022-12-20 13:19:59.132236169 Creating LVM volume 'vg/lv_thinpool'; Warning: some properties may not be preserved...
      +++ Print 'Creating LVM volume '\''vg/lv_thinpool'\''; Warning: some properties may not be preserved...'
      +++ lvm lvcreate -y --chunksize 65536b --type thin-pool -L 68056776704b --thinpool lv_thinpool vg
      Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
      Volume group "vg" has insufficient free space (16219 extents): 16226 required.

      ....

      +++ LogPrint 'Creating filesystem of type xfs with mount point / on /dev/mapper/vg-lv_root.'
      +++ Log 'Creating filesystem of type xfs with mount point / on /dev/mapper/vg-lv_root.'
      +++ echo '2022-12-20 13:38:29.466548728 Creating filesystem of type xfs with mount point / on /dev/mapper/vg-lv_root.'
      2022-12-20 13:38:29.466548728 Creating filesystem of type xfs with mount point / on /dev/mapper/vg-lv_root.
      +++ Print 'Creating filesystem of type xfs with mount point / on /dev/mapper/vg-lv_root.'
      +++ wipefs --all --force /dev/mapper/vg-lv_root
      +++ mkfs.xfs -f -m uuid=1cf3d69c-7dfe-40ab-b6a7-e6110912489e -i size=512 -d agcount=28 -s size=512 -i attr=2 -i projid32bit=1 -m crc=1 -m finobt=1 -b size=4096 -i maxpct=25 -d sunit=128 -d swidth=128 -l version=2 -l sunit=128 -l lazy-count=1 -n size=4096 -n version=2 -r extsize=4096 /dev/mapper/vg-lv_root
      mkfs.xfs: xfs_mkfs.c:2703: validate_datadev: Assertion `cfg->dblocks' failed.
      /var/lib/rear/layout/diskrestore.sh: line 323: 4142 Aborted (core dumped) mkfs.xfs -f -m uuid=1cf3d69c-7dfe-40ab-b6a7-e6110912489e -i size=512 -d agcount=28 -s size=512 -i attr=2 -i projid32bit=1 -m crc=1 -m finobt=1 -b size=4096 -i maxpct=25 -d sunit=128 -d swidth=128 -l version=2 -l sunit=128 -l lazy-count=1 -n size=4096 -n version=2 -r extsize=4096 /dev/mapper/vg-lv_root 1>&2
      +++ mkfs.xfs -f -i size=512 -d agcount=28 -s size=512 -i attr=2 -i projid32bit=1 -m crc=1 -m finobt=1 -b size=4096 -i maxpct=25 -d sunit=128 -d swidth=128 -l version=2 -l sunit=128 -l lazy-count=1 -n size=4096 -n version=2 -r extsize=4096 /dev/mapper/vg-lv_root
      mkfs.xfs: xfs_mkfs.c:2703: validate_datadev: Assertion `cfg->dblocks' failed.
      /var/lib/rear/layout/diskrestore.sh: line 323: 4144 Aborted (core dumped) mkfs.xfs -f -i size=512 -d agcount=28 -s size=512 -i attr=2 -i projid32bit=1 -m crc=1 -m finobt=1 -b size=4096 -i maxpct=25 -d sunit=128 -d swidth=128 -l version=2 -l sunit=128 -l lazy-count=1 -n size=4096 -n version=2 -r extsize=4096 /dev/mapper/vg-lv_root 1>&2
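
      For illustration only (an editorial sketch, not output from this system): the lvcreate call above omits the thin pool metadata size, so LVM picks a default that no longer fits into the free extents of "vg"; the mkfs.xfs assertion failures that follow appear to be follow-on damage from that earlier lvcreate failure. A recreation call that preserves the original, non-default metadata size would look roughly like this, using the standard lvcreate --poolmetadatasize option:

      # Failing call as generated by the layout recreation script (copied from the log above):
      lvm lvcreate -y --chunksize 65536b --type thin-pool -L 68056776704b --thinpool lv_thinpool vg
      # Sketch of the same call with the original metadata size passed explicitly
      # (8388608b = 8 MiB, matching the vg-lv_thinpool_tmeta size in the lsblk output below):
      lvm lvcreate -y --chunksize 65536b --type thin-pool -L 68056776704b --poolmetadatasize 8388608b --thinpool lv_thinpool vg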

      Version-Release number of selected component (if applicable):
      Relax-and-Recover 2.6 / 2020-06-17
      Red Hat Enterprise Linux release 9.1 (Plow)
      The host is a KVM virtual machine with UEFI; the <os> section of its libvirt domain XML is:
      <os>
      <type arch='x86_64' machine='pc-q35-rhel7.6.0'>hvm</type>
      <loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
      <nvram>/var/lib/libvirt/qemu/nvram/controller-0_VARS.fd</nvram>
      <boot dev='hd'/>
      </os>

      How reproducible:
      100%

      Steps to Reproduce:
      1. Back up a host
      2. Try to recover the host from the backup

      Actual results:
      Recovery fails, complaining that the disk layout recreation script failed

      Expected results:
      Recovery completes successfully

      Additional info:
      local.conf
      export TMPDIR="${TMPDIR-/var/tmp}"
      ISO_DEFAULT="automatic"
      OUTPUT=ISO
      BACKUP=NETFS
      BACKUP_PROG_COMPRESS_OPTIONS=( --gzip)
      BACKUP_PROG_COMPRESS_SUFFIX=".gz"
      OUTPUT_URL=nfs://192.168.24.1/ctl_plane_backups
      ISO_PREFIX=$HOSTNAME-202212201022
      BACKUP_URL=nfs://192.168.24.1/ctl_plane_backups
      BACKUP_PROG_CRYPT_ENABLED=False
      BACKUP_PROG_OPTIONS+=( --anchored --xattrs-include='.' --xattrs )
      BACKUP_PROG_EXCLUDE=( '/data/' '/tmp/' '/ctl_plane_backups/*' )
      EXCLUDE_RECREATE+=( "/dev/cinder-volumes" )
      USING_UEFI_BOOTLOADER=1
      LOGFILE="$LOG_DIR/rear-$HOSTNAME-202212201022.log"
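
      For reference (an assumption about what the OpenStack tooling does with this local.conf, not taken from the report): the equivalent manual ReaR invocation to produce the rescue ISO and the backup on the NFS share would be roughly:

      # run on the node to be protected, in debug/verbose mode
      rear -d -v mkbackup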

      [cloud-admin@controller-0 ~]$ lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
      NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
      /dev/loop0 /dev/loop0 loop LVM2_member 20.1G
      /dev/vda /dev/vda disk 64G
      |-/dev/vda1 /dev/vda1 /dev/vda part vfat MKFS_ESP 16M /boot/efi
      |-/dev/vda2 /dev/vda2 /dev/vda part 8M
      |-/dev/vda3 /dev/vda3 /dev/vda part ext4 mkfs_boot 500M /boot
      |-/dev/vda4 /dev/vda4 /dev/vda part LVM2_member 5G
      | |-/dev/mapper/vg-lv_thinpool_tmeta /dev/dm-0 /dev/vda4 lvm 8M
      | | `-/dev/mapper/vg-lv_thinpool-tpool /dev/dm-2 /dev/dm-0 lvm 63.4G
      | |   |-/dev/mapper/vg-lv_thinpool /dev/dm-3 /dev/dm-2 lvm 63.4G
      | |   |-/dev/mapper/vg-lv_root /dev/dm-4 /dev/dm-2 lvm xfs img-rootfs 10.5G /
      | |   |-/dev/mapper/vg-lv_tmp /dev/dm-5 /dev/dm-2 lvm xfs fs_tmp 1.2G /tmp
      | |   |-/dev/mapper/vg-lv_var /dev/dm-6 /dev/dm-2 lvm xfs fs_var 37G /var
      | |   |-/dev/mapper/vg-lv_log /dev/dm-7 /dev/dm-2 lvm xfs fs_log 3G /var/log
      | |   |-/dev/mapper/vg-lv_audit /dev/dm-8 /dev/dm-2 lvm xfs fs_audit 1.1G /var/log/audit
      | |   |-/dev/mapper/vg-lv_home /dev/dm-9 /dev/dm-2 lvm xfs fs_home 1.2G /home
      | |   `-/dev/mapper/vg-lv_srv /dev/dm-10 /dev/dm-2 lvm xfs fs_srv 9.4G /srv
      | `-/dev/mapper/vg-lv_thinpool_tdata /dev/dm-1 /dev/vda4 lvm 63.4G
      |   `-/dev/mapper/vg-lv_thinpool-tpool /dev/dm-2 /dev/dm-1 lvm 63.4G
      |     |-/dev/mapper/vg-lv_thinpool /dev/dm-3 /dev/dm-2 lvm 63.4G
      |     |-/dev/mapper/vg-lv_root /dev/dm-4 /dev/dm-2 lvm xfs img-rootfs 10.5G /
      |     |-/dev/mapper/vg-lv_tmp /dev/dm-5 /dev/dm-2 lvm xfs fs_tmp 1.2G /tmp
      |     |-/dev/mapper/vg-lv_var /dev/dm-6 /dev/dm-2 lvm xfs fs_var 37G /var
      |     |-/dev/mapper/vg-lv_log /dev/dm-7 /dev/dm-2 lvm xfs fs_log 3G /var/log
      |     |-/dev/mapper/vg-lv_audit /dev/dm-8 /dev/dm-2 lvm xfs fs_audit 1.1G /var/log/audit
      |     |-/dev/mapper/vg-lv_home /dev/dm-9 /dev/dm-2 lvm xfs fs_home 1.2G /home
      |     `-/dev/mapper/vg-lv_srv /dev/dm-10 /dev/dm-2 lvm xfs fs_srv 9.4G /srv
      |-/dev/vda5 /dev/vda5 /dev/vda part iso9660 config-2 65M
      `-/dev/vda6 /dev/vda6 /dev/vda part LVM2_member 58.5G
        `-/dev/mapper/vg-lv_thinpool_tdata /dev/dm-1 /dev/vda6 lvm 63.4G
          `-/dev/mapper/vg-lv_thinpool-tpool /dev/dm-2 /dev/dm-1 lvm 63.4G
            |-/dev/mapper/vg-lv_thinpool /dev/dm-3 /dev/dm-2 lvm 63.4G
            |-/dev/mapper/vg-lv_root /dev/dm-4 /dev/dm-2 lvm xfs img-rootfs 10.5G /
            |-/dev/mapper/vg-lv_tmp /dev/dm-5 /dev/dm-2 lvm xfs fs_tmp 1.2G /tmp
            |-/dev/mapper/vg-lv_var /dev/dm-6 /dev/dm-2 lvm xfs fs_var 37G /var
            |-/dev/mapper/vg-lv_log /dev/dm-7 /dev/dm-2 lvm xfs fs_log 3G /var/log
            |-/dev/mapper/vg-lv_audit /dev/dm-8 /dev/dm-2 lvm xfs fs_audit 1.1G /var/log/audit
            |-/dev/mapper/vg-lv_home /dev/dm-9 /dev/dm-2 lvm xfs fs_home 1.2G /home
            `-/dev/mapper/vg-lv_srv /dev/dm-10 /dev/dm-2 lvm xfs fs_srv 9.4G /srv
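
      The lsblk output above shows the pool metadata volume (vg-lv_thinpool_tmeta) at only 8M. A quick way to see whether a system is exposed to this problem before taking a backup (an editorial sketch using standard lvs(8)/vgs(8) reporting fields, not commands from this report) is to compare that metadata LV size with the free space left in the volume group:

      # bracketed internal LVs include the pool metadata volume, e.g. [lv_thinpool_tmeta]
      lvs -a -o lv_name,lv_size vg
      # unallocated space remaining in the volume group
      vgs -o vg_name,vg_size,vg_free vg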

      The issue was found during OpenStack control plane node backup and recovery; link to the procedure:
      https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/assembly_backing-up-the-control-plane-nodes_br-undercloud-ctlplane#proc_creating-a-backup-of-the-control-plane-nodes_backup-ctlplane

      mkdir /tmp/backup-recover-temp/
      cp ./overcloud-deploy/overcloud/config-download/overcloud/tripleo-ansible-inventory.yaml /tmp/backup-recover-temp/tripleo-inventory.yaml

      source /home/stack/stackrc
      openstack overcloud backup --inventory /tmp/backup-recover-temp/tripleo-inventory.yaml --setup-nfs --extra-vars '{"tripleo_backup_and_restore_server": 192.168.24.1, "nfs_server_group_name": Undercloud}'

      openstack overcloud backup --inventory /tmp/backup-recover-temp/tripleo-inventory.yaml --setup-rear --extra-vars '{"tripleo_backup_and_restore_server": 192.168.24.1}'

      openstack overcloud backup --inventory /tmp/backup-recover-temp/tripleo-inventory.yaml
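
      Recovery is then attempted by booting the affected node from the generated ISO. In a manual ReaR workflow (a sketch for debugging, not part of the documented OpenStack procedure) the failure can be reproduced and inspected from the rescue shell roughly like this:

      rear -d -v recover                            # fails at the disk layout recreation step
      less /var/lib/rear/layout/diskrestore.sh      # generated script containing the failing lvcreate call
      bash -x /var/lib/rear/layout/diskrestore.sh   # re-run with tracing to iterate on the failure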

              Pavel Cahyna
              Roman Safronov
              Jakub Haruda
              Mugdha Soni