RHEL-2054

zipl-switch-to-blscfg dies with "entry already exists" when having more than one "rescue" entry


    • Type: Bug
    • Resolution: Can't Do
    • Priority: Major
    • rhel-8.4.0
    • s390utils
    • Severity: Important
    • rhel-arch-hw
    • ssg_platform_enablement

      Description of problem:

      When there is more than one "rescue" entry in /etc/zipl.conf, the utility dies after printing "BLS file /boot/loader/entries/<MACHINEID>-0-rescue.conf already exists".

      This happens because the file name "<MACHINEID>-0-rescue.conf" is hardcoded, so as soon as one "rescue" entry exists, every further "rescue" entry maps to the same BLS file.

      Typically the tool dies on systems that have been cloned, where a "rescue" entry for a different machine id already exists.
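A quick pre-flight check makes the failure mode easy to spot before running the tool. The sketch below is not part of s390utils; it simply counts rescue image lines in a zipl.conf (assuming the `image=/boot/vmlinuz-0-rescue-<id>` naming seen in the reproducer below), since any count above one will trip the hardcoded-filename collision:

```shell
#!/bin/sh
# Sketch (not part of s390utils): count the rescue entries in a zipl.conf.
# With more than one, zipl-switch-to-blscfg dies on the second
# "<MACHINEID>-0-rescue.conf" it tries to create.
count_rescue_entries() {
    # grep -c prints the number of matching lines (and exits 1 when zero)
    grep -c '^image=/boot/vmlinuz-0-rescue-' "$1"
}
```

Running `count_rescue_entries /etc/zipl.conf` on the reproducer configuration below would print 2.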

      Version-Release number of selected component (if applicable):

      s390utils-base-2.15.1-5.el8.s390x

      How reproducible:

      Always

      Steps to Reproduce:
      1. Create an /etc/zipl.conf from a RHEL 7 system that contains two rescue entries:

      -------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
      [defaultboot]
      defaultauto
      prompt=1
      timeout=5
      default=Red_Hat_Enterprise_Linux_Server_7.9_Rescue_7fef08a17f6a400db03b693a0ef30ba0
      target=/boot
      [Red_Hat_Enterprise_Linux_Server_7.9_Rescue_7fef08a17f6a400db03b693a0ef30ba0]
      image=/boot/vmlinuz-0-rescue-7fef08a17f6a400db03b693a0ef30ba0
      parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root LANG=en_US.UTF-8 ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"
      ramdisk=/boot/initramfs-0-rescue-7fef08a17f6a400db03b693a0ef30ba0.img
      [3.10.0-1160.25.1.el7.s390x]
      image=/boot/vmlinuz-3.10.0-1160.25.1.el7.s390x
      parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root LANG=en_US.UTF-8 ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"
      ramdisk=/boot/initramfs-3.10.0-1160.25.1.el7.s390x.img
      [linux-0-rescue-fbf2f10617024e97989bccd4d299ec21]
      image=/boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21
      ramdisk=/boot/initramfs-0-rescue-fbf2f10617024e97989bccd4d299ec21.img
      parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"
      -------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

      Ideally, adjust the machine id "7fef08a17f6a400db03b693a0ef30ba0" to match the system's own machine id.
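The adjustment in step 1 can be scripted. This is a hedged helper, not part of the reproducer as filed: it substitutes the example machine id with the one from /etc/machine-id (falling back to a no-op when that file is absent). Test it against a copy first; the real target is /etc/zipl.conf.

```shell
#!/bin/sh
# Sketch: rewrite the example machine id in a zipl.conf to this system's
# own id. OLD_ID is the id used in the pasted configuration above.
OLD_ID=7fef08a17f6a400db03b693a0ef30ba0
# Fall back to OLD_ID (making the sed a no-op) if /etc/machine-id is missing.
NEW_ID=$(cat /etc/machine-id 2>/dev/null || echo "$OLD_ID")

adjust_machine_id() {
    # In-place substitution of every occurrence of the example id.
    sed -i "s/${OLD_ID}/${NEW_ID}/g" "$1"
}
```

Usage: `adjust_machine_id /etc/zipl.conf`.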

      2. Delete the /boot/loader/entries directory, then execute zipl-switch-to-blscfg:

      1. rm -fr /boot/loader/entries
      2. zipl-switch-to-blscfg

      Actual results:

      BLS file /boot/loader/entries/7fef08a17f6a400db03b693a0ef30ba0-0-rescue.conf already exists

      Expected results:

      A warning that the entry could not be created because its machine id differs from the system's machine id.

      Additional info:

      The code responsible for creating the file is shown below (the `*` glob characters, apparently lost in transcription, are restored):
      -------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
      168     if [ -n "${zipl_to_bls[$key]}" ]; then
      169         if [[ $key = "image" && $version_name == true ]]; then
      170             if [[ $val = *"vmlinuz-"* ]]; then
      171                 version="${val##*/vmlinuz-}"
      172             else
      173                 version="${val##*/}"
      174             fi
      175             echo "version $version" >> ${OUTPUT}
      176             if [[ $version = *"rescue"* ]]; then
      177                 FILENAME=${BLS_DIR}/${MACHINE_ID}-0-rescue.conf
      :
      -------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

      On line 177 we can see that FILENAME depends only on the system's machine id, not on the machine id actually found in the rescue entry, which is extracted from the kernel image name, here "/boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21".
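One possible shape of a fix, sketched here as a standalone function rather than the shipped code: for a rescue entry, the version string parsed from the image name already carries a machine id ("0-rescue-<id>"), so the BLS file name can reuse that id instead of the hardcoded ${MACHINE_ID}. The non-rescue branch and the "<machineid>-<version>.conf" scheme for regular kernels are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of a possible fix for line 177 (not the shipped code): reuse the
# machine id embedded in the rescue entry's version string instead of the
# system-wide MACHINE_ID, so cloned systems' rescue entries get distinct
# BLS file names.
BLS_DIR=/boot/loader/entries
MACHINE_ID=7fef08a17f6a400db03b693a0ef30ba0   # example id from this report

bls_filename_for() {
    version=$1
    case $version in
        0-rescue-*)
            # Keep the id that is already embedded in the entry.
            echo "${BLS_DIR}/${version#0-rescue-}-0-rescue.conf" ;;
        *)
            # Regular kernels: assumed "<machineid>-<version>.conf" scheme.
            echo "${BLS_DIR}/${MACHINE_ID}-${version}.conf" ;;
    esac
}
```

With this, the two rescue entries from the reproducer map to two distinct files instead of colliding on "${MACHINE_ID}-0-rescue.conf".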

              rhn-support-dhorak Daniel Horak
              rhn-support-rmetrich Renaud Métrich
              Vilem Marsik
              Votes: 0
              Watchers: 8