OpenShift Virtualization / CNV-35195

[2249554] scsi3 persistent validation fails with Windows Shared Cluster


    • Priority: Critical

      Description of problem:
      Running the Windows failover cluster validation tool on Windows VMs with a shared iSCSI LUN-based PVC fails during the SCSI-3 persistent reservation REGISTER AND IGNORE EXISTING test

      Version-Release number of selected component (if applicable):
      oc version
      Client Version: 4.14.0-rc.2
      Kustomize Version: v5.0.1
      Server Version: 4.14.0-rc.2
      Kubernetes Version: v1.27.6+1648878

      oc get csv -n openshift-cnv
      NAME                                       DISPLAY                       VERSION   REPLACES                                   PHASE
      kubevirt-hyperconverged-operator.v4.14.0   OpenShift Virtualization      4.14.0    kubevirt-hyperconverged-operator.v4.13.3   Succeeded
      openshift-pipelines-operator-rh.v1.11.0    Red Hat OpenShift Pipelines   1.11.0                                               Succeeded

      How reproducible:
      100%

      Steps to Reproduce:
      1. Map an iSCSI LUN from the NetApp storage provider to 3 worker nodes
      2. Create a PV referencing the iSCSI LUN and listing the 3 worker nodes
      3. Create a shared PVC based on the iSCSI LUN with volumeMode Block and accessMode ReadWriteMany
      4. Create 3 Windows 2012 R2 VMs, each with its own OS disk and each referencing the shared iSCSI LUN-based PVC as a second disk
      5. Install virtio-win-guest-tools to update the drivers on each of the VMs
      6. Via Disk Manager, access the iSCSI LUN, create a partition spanning all available space and format it with NTFS
      7. Install the Windows failover cluster software on each of the 3 Windows VMs
      8. Install and configure an Active Directory/DNS domain controller on one of the VMs
      9. Run the Failover Cluster - Validate Configuration tool and select only the Storage - iSCSI Reservation test
      10. During the SCSI-3 persistent reservation REGISTER AND IGNORE EXISTING test, the targeted node/VM crashes with the error:
      "Failure issuing call to Persistent REGISTER AND IGNORE EXISTENT on Test Disk 0 from node xxxxx.."

          • Directly connecting 3 standalone (non-OpenShift Virtualization) bare metal machines installed with Windows to the same NetApp iSCSI LUN and running the same failover validation tool with the SCSI-3 reservation tests works fine
          • This happens on multiple clusters using a different NetApp. It has also been reproduced with a RHEL 9 system presenting the iSCSI targets.
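
      For triage it may help to confirm that persistent reservation support is actually switched on: in KubeVirt, SCSI-3 reservation passthrough for LUN disks with "reservation: true" (as in the VM YAML below) depends on the qemu-pr-helper, which is gated behind a feature gate. A minimal sketch of the relevant HyperConverged fragment, assuming the 4.14 feature gate is named persistentReservation:

      ```yaml
      # Sketch only: enables SCSI-3 persistent reservation passthrough in
      # OpenShift Virtualization, assuming the HCO featureGates field name.
      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        featureGates:
          persistentReservation: true
      ```

      If this gate were off, REGISTER AND IGNORE EXISTING from the guest would be expected to fail, so checking it helps separate a passthrough bug from a configuration gap.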

      Actual results:
      The SCSI-3 persistent reservation REGISTER AND IGNORE EXISTING test fails and crashes the target Windows node

      Expected results:
      The SCSI-3 persistent reservation REGISTER AND IGNORE EXISTING test should pass without crashing the target Windows node

      Additional info:

      Storage class yaml:
      ------------------------------------------
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: local-scsi
      provisioner: kubernetes.io/no-provisioner
      volumeBindingMode: Immediate

      PV Yaml:
      ------------------------------------------
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: iscsi-pv-root
      spec:
        capacity:
          storage: 70Gi
        accessModes:
          - ReadWriteMany
        storageClassName: local-scsi
        iscsi:
          targetPortal: 10.9.96.31:3260
          iqn: iqn.1992-08.com.netapp:sn.438c2b596a3811e894b800a098da27d5:vs.4
          lun: 0
        volumeMode: Block
        nodeAffinity:
          required:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - stg03-kevin-zrzbv-worker-0-2c2q7
                      - stg03-kevin-zrzbv-worker-0-9wssf
                      - stg03-kevin-zrzbv-worker-0-zwwg9

      Shared PVC Yaml:
      ---------------------------------------------------------
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: scsi-pvc
      spec:
        volumeMode: Block
        storageClassName: local-scsi
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 70G

      VM1 Yaml:
      ----------------------------------------------------------

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        labels:
          kubevirt.io/vm: vm-win12-datavolume
        name: vm-win12-datavolume
      spec:
        dataVolumeTemplates:

      VM2 Yaml:
      ------------------------------------------------------------

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        labels:
          kubevirt.io/vm: vm-win12-datavolume-b
        name: vm-win12-datavolume-b
      spec:
        dataVolumeTemplates:

      VM3 Yaml:
      ---------------------------------------------------------

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        labels:
          kubevirt.io/vm: vm-win12-datavolume-c
        name: vm-win12-datavolume-c
      spec:
        dataVolumeTemplates:
          - metadata:
              creationTimestamp: null
              name: win12-dv-c
            spec:
              pvc:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 60Gi
                storageClassName: ocs-storagecluster-ceph-rbd
                volumeMode: Block
              source:
                http:
                  url: http://10.19.3.125/pub/users/joherr/os/bootsource_images/win2012r2.qcow2.gz
        running: false
        template:
          metadata:
            labels:
              kubevirt.io/vm: vm-win12-datavolume-c
          spec:
            domain:
              devices:
                disks:
                  - disk:
                      bus: sata
                    name: datavolumedisk1-c
                  - lun:
                      bus: scsi
                      reservation: true
                    name: scsi-disk
              machine:
                type: ""
              resources:
                requests:
                  memory: 7Gi
            terminationGracePeriodSeconds: 0
            volumes:
              - dataVolume:
                  name: win12-dv-c
                name: datavolumedisk1-c
              - name: scsi-disk
                persistentVolumeClaim:
                  claimName: scsi-pvc
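
      For comparison outside the guest, the same PR-out service action can be exercised directly against the LUN from a worker node using sg_persist from sg3_utils. The device path and key below are hypothetical placeholders; the commands are guarded so the sketch is a no-op on a machine without the iSCSI session:

      ```shell
      # Hypothetical device and key; substitute the block device backing the
      # shared LUN on a worker node that has the iSCSI session logged in.
      DEV=${DEV:-/dev/sdb}
      KEY=${KEY:-0x123abc}

      if command -v sg_persist >/dev/null && [ -b "$DEV" ]; then
        # Read current registrations and any held reservation.
        sg_persist --in --read-keys "$DEV"
        sg_persist --in --read-reservation "$DEV"
        # Issue REGISTER AND IGNORE EXISTING KEY -- the service action the
        # Windows validation test fails on inside the VM.
        sg_persist --out --register-ignore --param-sarek="$KEY" "$DEV"
      else
        echo "sg_persist or $DEV not available; run on a worker node with the iSCSI session"
      fi
      ```

      If these commands succeed from the node while the in-guest test crashes the VM, that would point at the reservation passthrough path rather than the target itself.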

              Adam Litke (alitke@redhat.com)
              Kevin Alon Goldblatt (kgoldbla)