OpenShift Bugs / OCPBUGS-33543

root disk partitions getting deleted


    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major
    • Affects Version: 4.14.0
    • Severity: Important

      The node works as expected until a reboot, after which it does not come back up.

      The VM console shows:
      ~~~
      Redhat ... no media
      EFI virtual disk (0.0) ... no media
      EFI network 1 .
      ~~~

      Booting the VM from an ISO and running lsblk shows that sda no longer has any partitions; fdisk confirms the same.

      The node role is infra, and the node is labeled for storage:
      ~~~
      node-role.kubernetes.io/infra: ""
      cluster.ocs.openshift.io/openshift-storage: ""
      ~~~

      The node has two disks, sda and sdb:
      ~~~
      sda 8:0 0 120G 0 disk
      sdb 8:16 0 1.7T 0 disk
      ~~~

      The PV annotation for that node points at sda, which is the root disk.

      oc get pv local-pv-b0acdaaa -oyaml
      ~~~
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/bound-by-controller: "yes"
          pv.kubernetes.io/provisioned-by: local-volume-provisioner-ocs1.rch3cdtnsrsocp.vzbi.com
          storage.openshift.com/device-id: wwn-0x600508b1001c09040a5b34781cd00445
          storage.openshift.com/device-name: sda
      ~~~
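      A quick way to audit whether any local PV is pinned to a given device is to pull the device-name annotation out of each PV manifest. A minimal sketch, assuming the annotation format shown above; the `pv_device_name` helper is hypothetical, and on a live cluster the input would come from `oc get pv -o yaml` (here it is demonstrated on a saved manifest):

```shell
# Hypothetical helper: print the value of the storage.openshift.com/device-name
# annotation from PV YAML supplied on stdin (assumes the format shown above).
pv_device_name() {
  grep 'storage.openshift.com/device-name:' | awk '{print $2}'
}

# Example, using the annotation block from this report.
# On a live cluster: oc get pv local-pv-b0acdaaa -oyaml | pv_device_name (assumption)
cat <<'EOF' | pv_device_name
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    storage.openshift.com/device-name: sda
EOF
# prints: sda
```

      Because `sdX` names are assigned at boot and are not guaranteed stable, a local PV whose annotation resolves to the root disk is worth flagging.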

      The issue does not occur when the cluster is on 4.12; it happens only once the cluster is on 4.14.10.

              Hemant Kumar (hekumar@redhat.com)
              Daniel Seals (rhn-support-dseals)
              Chao Yang
              Votes: 0
              Watchers: 11
