RHEL-60959

Online reencryption may run with a block device file descriptor open without the (essential) O_DIRECT flag.

    • Bug
    • Resolution: Duplicate
    • rhel-10.0.beta
    • cryptsetup
    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA
    • x86_64

      This is the RHEL10-beta version of bug https://issues.redhat.com/browse/RHEL-41238.
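Whether an already-open descriptor actually carries O_DIRECT can be checked from userspace via the `flags` field of /proc/&lt;pid&gt;/fdinfo/&lt;fd&gt; (octal; O_DIRECT contributes 040000 on x86_64). A minimal stand-alone sketch, using an ordinary temp file and fd number purely for illustration, not the actual reencryption process:

```shell
# Sketch: read an fd's open flags from /proc fdinfo.
# O_DIRECT is 040000 (octal) on x86_64.
exec 9<>/tmp/fdinfo_demo.$$          # ordinary open, no O_DIRECT
flags=$(awk '/^flags:/ {print $2}' /proc/$$/fdinfo/9)
if [ $(( flags & 040000 )) -eq 0 ]; then
    # This is the condition the bug title describes for the
    # reencryption fd on the block device.
    echo "fd 9 lacks O_DIRECT (flags=$flags)"
fi
exec 9>&-
rm -f /tmp/fdinfo_demo.$$
```

The same check can be pointed at a running `cryptsetup reencrypt` process by substituting its PID and the fd of the block device.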

      kernel-6.11.0-0.rc5.22.el10    BUILT: Tue Aug 27 05:47:28 PM EDT 2024
      lvm2-2.03.24-4.el10    BUILT: Wed Sep 25 05:55:55 AM EDT 2024
      lvm2-libs-2.03.24-4.el10    BUILT: Wed Sep 25 05:55:55 AM EDT 2024
      cryptsetup-2.7.3-2.el10    BUILT: Mon Jun 24 12:05:26 PM EDT 2024
      cryptsetup-libs-2.7.3-2.el10    BUILT: Mon Jun 24 12:05:26 PM EDT 2024
       
       
      SCENARIO - [header_online_reencryption_of_cached_origin_volumes]
      Create a snapshot of an encrypted LUKS origin volume that uses a detached header, then verify the data on the snapshot after reencryption (repro of RHEL-41238)
       
      *** Cache info for this scenario ***
      *  origin (slow):  /dev/sda1
      *  pool (fast):    /dev/sdf1
      ************************************
       
      Adding "slow" and "fast" tags to corresponding pvs
      pvchange --addtag slow /dev/sda1
      pvchange --addtag fast /dev/sdf1
      Create origin (slow) volume
      lvcreate --yes --wipesignatures y  -L 4G -n corigin cache_sanity @slow
       
      Create cache data and cache metadata (fast) volumes
      lvcreate --yes  -L 2G -n fs_A_pool cache_sanity @fast
      lvcreate --yes  -L 12M -n fs_A_pool_meta cache_sanity @fast
       
      Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: smq  mode: passthrough
      lvconvert --yes --type cache-pool --cachepolicy smq --cachemode passthrough -c 64 --poolmetadata cache_sanity/fs_A_pool_meta cache_sanity/fs_A_pool
        WARNING: Converting cache_sanity/fs_A_pool and cache_sanity/fs_A_pool_meta to cache pool's data and metadata volumes with metadata wiping.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
      Create cached volume by combining the cache pool (fast) and origin (slow) volumes
      lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/fs_A_pool cache_sanity/corigin
       
       
      Encrypting corigin volume
      cryptsetup reencrypt --encrypt --init-only /dev/cache_sanity/corigin --header /tmp/fs_io_writes_luks_header.155454
      cryptsetup reencrypt /dev/cache_sanity/corigin --header /tmp/fs_io_writes_luks_header.155454
      cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/fs_io_writes_luks_header.155454
       
      Placing an xfs filesystem on origin volume
      warning: device is not properly aligned /dev/mapper/luks_corigin
      Mounting origin volume
      Writing files to /mnt/corigin
      Checking files on /mnt/corigin
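The "Writing files" / "Checking files" steps above follow a standard write-then-verify pattern: record checksums at write time, then re-check them after the operation under test. A minimal stand-alone sketch (paths and file names are illustrative, not the harness's actual file set):

```shell
# Sketch of the write/verify pattern: record SHA-256 sums at write
# time, then re-check them later (e.g. after reencryption, or on a
# mounted snapshot of the origin).
mnt=/tmp/demo_corigin.$$             # stand-in for /mnt/corigin
mkdir -p "$mnt"
for i in 1 2 3; do
    dd if=/dev/urandom of="$mnt/file$i" bs=4096 count=4 2>/dev/null
done
( cd "$mnt" && sha256sum file1 file2 file3 ) > /tmp/demo.sums.$$
# ... online reencryption would run here ...
( cd "$mnt" && sha256sum -c /tmp/demo.sums.$$ ) && echo "data intact"
rm -rf "$mnt" /tmp/demo.sums.$$
```

In the failing run below, the verification never gets this far: the snapshot cannot even be mounted.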
       
      Running 'sync' before reencryption...
       
      (ONLINE) Re-encrypting corigin volume
      cryptsetup reencrypt --resilience checksum --active-name luks_corigin --header /tmp/fs_io_writes_luks_header.155454
       
      Running 'sync' before snap creation...
       
      Making snapshot of origin volume
      lvcreate --yes  -s /dev/cache_sanity/corigin -c 128 -n fs_snap -L 4296704
      copying header for use with snap volume: 'cp /tmp/fs_io_writes_luks_header.155454 /tmp/snap_fs_io_writes_luks_header.155454'
      cryptsetup luksOpen /dev/cache_sanity/fs_snap luks_fs_snap --header /tmp/snap_fs_io_writes_luks_header.155454
       
      Mounting snap volume
      mount -o nouuid /dev/mapper/luks_fs_snap /mnt/fs_snap
      mount: /mnt/fs_snap: can't read superblock on /dev/mapper/luks_fs_snap.
             dmesg(1) may have more information after failed mount system call.
      couldn't mount fs snap on /mnt/fs_snap
      
      
      Sep 30 12:47:24 grant-03 qarshd[5393]: Running cmdline: mount -o nouuid /dev/mapper/luks_fs_snap /mnt/fs_snap
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Mounting V5 Filesystem 619a1074-8b1f-445f-a825-54f4024b34d7
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Starting recovery (logdev: internal)
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Corruption warning: Metadata has LSN (1853108512:2064282484) ahead of current LSN (2:23552). Please unmount and run xfs_repair (>= v4.3) to resolve.
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Metadata CRC error detected at xfs_refcountbt_read_verify+0x16/0xc0 [xfs], xfs_refcountbt block 0x30 
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Unmount and run xfs_repair
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): First 128 bytes of corrupted metadata buffer:
      Sep 30 12:47:24 grant-03 kernel: 00000000: 73 65 67 6d 65 6e 74 5f 63 6f 75 6e 74 20 3d 20  segment_count =
      Sep 30 12:47:24 grant-03 kernel: 00000010: 31 0a 0a 73 65 67 6d 65 6e 74 31 20 7b 0a 73 74  1..segment1 {.st
      Sep 30 12:47:24 grant-03 kernel: 00000020: 61 72 74 5f 65 78 74 65 6e 74 20 3d 20 30 0a 65  art_extent = 0.e
      Sep 30 12:47:24 grant-03 kernel: 00000030: 78 74 65 6e 74 5f 63 6f 75 6e 74 20 3d 20 33 0a  xtent_count = 3.
      Sep 30 12:47:24 grant-03 kernel: 00000040: 0a 74 79 70 65 20 3d 20 22 73 74 72 69 70 65 64  .type = "striped
      Sep 30 12:47:24 grant-03 kernel: 00000050: 22 0a 73 74 72 69 70 65 5f 63 6f 75 6e 74 20 3d  ".stripe_count =
      Sep 30 12:47:24 grant-03 kernel: 00000060: 20 31 0a 0a 73 74 72 69 70 65 73 20 3d 20 5b 0a   1..stripes = [.
      Sep 30 12:47:24 grant-03 kernel: 00000070: 22 70 76 30 22 2c 20 35 33 34 0a 5d 0a 7d 0a 7d  "pv0", 534.].}.}
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): metadata I/O error in "xfs_btree_read_buf_block+0xa1/0x120 [xfs]" at daddr 0x30 len 8 error 74
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Failed to recover leftover CoW staging extents, err -117.
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Filesystem has been shut down due to log error (0x2).
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Please unmount the filesystem and rectify the problem(s).
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Ending recovery (logdev: internal)
      Sep 30 12:47:24 grant-03 kernel: XFS (dm-11): Error -5 reserving per-AG metadata reserve pool.
      

        Ondrej Kozina (okozina@redhat.com)
        Corey Marthaler (cmarthal@redhat.com)
        storage-qe