Type: Bug
Resolution: Unresolved
Affects Version: 4.19
Severity: Important
Description of problem:
I'm from the Oracle OpenShift engineering team. We have a customer issue where two nodes in an OpenShift cluster lose their partition tables during a reboot. I have attached logs for one of the nodes to this message. The Oracle support team has helped the customer resolve the issue (we are still trying to get the details on how it was resolved).
Logs from the OCI compute team performing a rebootMigrate. Both instances failed with this message:
Boot0001 "UEFI ORACLE BlockVolume " from PciRoot(0x0)/Pci(0xD,0x0)/Pci(0x0,0x0)/Scsi(0x1,0x1): Not Found
Failed to Load Boot0002 "UEFI ORACLE BlockVolume 2" from PciRoot(0x0)/Pci(0xE,0x0)/Pci(0x0,0x0)/Scsi(0x1,0x1): Not Found
Loading Boot0003 "EFI Internal Shell" from Fv(7CB8BDC9-F8EB-4F34-AAEA-3EE4AF6516A1)/FvFile(7C04A583-9E3E-4F1C-AD65-E05268D0B4D1)
Image Loaded: Shell.efi
Starting Boot0003 "EFI Internal Shell" from Fv(7CB8BDC9-F8EB-4F34-AAEA-3EE4AF6516A1)/FvFile(7C04A583-9E3E-4F1C-AD65-E05268D0B4D1)
UEFI Interactive Shell v2.2
EDK II
UEFI v2.70 (EDK II, 0x00010000)
Mapping table
BLK0: Alias(s):
PciRoot(0x0)/Pci(0xD,0x0)/Pci(0x0,0x0)/Scsi(0x1,0x1)
BLK1: Alias(s):
PciRoot(0x0)/Pci(0xE,0x0)/Pci(0x0,0x0)/Scsi(0x1,0x1)
Press ESC in 5 seconds to skip startup.nsh or any other key to continue.Press
ESC in 4 seconds to skip startup.nsh or any other key to continue.Press ESC in 3 seconds
to skip startup.nsh or any other key to continue.Press ESC in 2 seconds to skip
startup.nsh or any other key to continue.Press ESC in 1 seconds to skip startup.nsh or
any other key to continue.
Shell>
Shell>
Shell>
Shell>
Version-Release number of selected component (if applicable):
The customer hasn't shared that yet. I will get the details.
How reproducible:
The issue was observed on two nodes only after a reboot. Prior to the reboot, both nodes operated normally. Rebooting the affected nodes appears to reliably trigger the problem.
Steps to Reproduce:
N/A
Actual results:
The instances did not boot; as shown in the log above, both dropped to the UEFI shell because neither boot entry could be found.
Expected results:
A reboot should not cause the instances to go down.
Additional info:
N/A
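For reference, a minimal diagnostic sketch for confirming the "lost partition table" symptom from a rescue environment. It assumes a Linux shell and a 512-byte-sector disk; the `check_gpt` function name and any device path passed to it are hypothetical, not something from the customer environment. A valid GPT header begins with the 8-byte signature "EFI PART" at LBA 1 (byte offset 512 on 512-byte-sector disks):

```shell
# Hypothetical diagnostic helper: report whether a disk (or disk image)
# still carries a GPT signature. A valid GPT header starts with the
# 8-byte ASCII signature "EFI PART" at LBA 1, i.e. byte offset 512 on
# a 512-byte-sector disk.
check_gpt() {
  dev="$1"
  # Read exactly 8 bytes at offset 512.
  sig=$(dd if="$dev" bs=1 skip=512 count=8 2>/dev/null)
  if [ "$sig" = "EFI PART" ]; then
    echo "GPT signature present on $dev"
  else
    echo "GPT signature MISSING on $dev"
  fi
}
```

Run against the boot disk of an affected node booted from rescue media (e.g. a hypothetical `/dev/sda`), this would help distinguish a genuinely wiped partition table from a firmware-side boot-entry problem, since the UEFI "Not Found" messages alone do not say which it is.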