OpenShift Virtualization / CNV-31302

[2225116] [4.14] VMExport: can't download a PVC that was created from DV on NFS (when there's no VM that owns this PVC) - the storage doesn't support fsGroup



      +++ This bug was initially created as a clone of Bug #2156525 +++

      Description of the problem:
      VMExport: can't download a PVC that was created from a DV on NFS (when no VM owns the DV/PVC)

      Version-Release number of selected component (if applicable):
      4.12, 4.13, 4.14

      How reproducible:
      Always on NFS when the DV/PVC is not owned by a VM.
      It works on NFS when a VM owns the DV/PVC.
      It works on HPP and OCS regardless of VM ownership.

      Steps to Reproduce:
      1. Create an NFS DV (see dv-nfs.yaml under Additional info)
      2. Create a VMExport for the PVC (see vmexport-dv-nfs.yaml under Additional info)
      3. Verify the VMExport is Ready:
      $ oc get vmexport
      NAME             SOURCEKIND              SOURCENAME   PHASE
      export-pvc-nfs   PersistentVolumeClaim   dv-source    Ready

      4. Try to download the image and see it fail with an error:
      $ virtctl vmexport download export-pvc-nfs --output=disk-pvc-nfs.img --keep-vme
      Processing completed successfully
      Bad status: 500 Internal Server Error

      5. See the log:

      $ oc logs virt-export-export-pvc-nfs | grep error

      {"component":"virt-exportserver-virt-export-export-pvc-nfs","level":"error","msg":"error opening /export-volumes/dv-source/disk.img","pos":"exportserver.go:315","reason":"open /export-volumes/dv-source/disk.img: permission denied","timestamp":"2022-12-27T09:22:32.371279Z"}

      Actual results:
      Bad status: 500 Internal Server Error

      Expected results:
      Image downloaded successfully

      Additional info:

      $ cat dv-nfs.yaml

      apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        name: dv-source
      spec:
        source:
          http:
            url: <cirros.img>
        storage:
          resources:
            requests:
              storage: 1Gi
          storageClassName: nfs
          volumeMode: Filesystem

      $ cat vmexport-dv-nfs.yaml

      apiVersion: export.kubevirt.io/v1alpha1
      kind: VirtualMachineExport
      metadata:
        name: export-pvc-nfs
      spec:
        source:
          apiGroup: ""
          kind: PersistentVolumeClaim
          name: dv-source

      — Additional comment from Alex Kalenyuk on 2022-12-27 10:33:23 UTC —

      This happens because NFS does not support fsGroup
      https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/#allow-csi-drivers-to-declare-support-for-fsgroup-based-permissions

      [virt-exportserver@virt-export-export-pvc-nfs ~]$ id
      uid=1001(virt-exportserver) gid=1001(virt-exportserver) groups=1001(virt-exportserver),107
      [virt-exportserver@virt-export-export-pvc-nfs ~]$ ls -la /export-volumes/dv-source/disk.img
      -rw-rw----. 1 107 99 1073741824 Dec 27 09:17 /export-volumes/dv-source/disk.img
      [virt-exportserver@virt-export-export-pvc-nfs ~]$ md5sum /export-volumes/dv-source/disk.img
      md5sum: /export-volumes/dv-source/disk.img: Permission denied
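
      To illustrate what fsGroup would normally buy us (a minimal sketch with a hypothetical name and a placeholder image, not the actual exporter manifest): on a driver that supports it, the kubelet recursively chgrps the volume to the fsGroup GID, so a non-root container can read files like disk.img; on NFS that chgrp never happens.

      apiVersion: v1
      kind: Pod
      metadata:
        name: fsgroup-sketch                      # hypothetical name, illustration only
      spec:
        securityContext:
          fsGroup: 107                            # where supported, kubelet chgrps the volume to GID 107;
                                                  # NFS ignores this, so disk.img stays 107:99, mode rw-rw----
        containers:
        - name: exporter
          image: example.test/virt-exportserver   # placeholder image
          command: ["sleep", "infinity"]
          volumeMounts:
          - name: export-volume
            mountPath: /export-volumes/dv-source
        volumes:
        - name: export-volume
          persistentVolumeClaim:
            claimName: dv-source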

      I am not sure if there's something we could do here, seeing as we want to run as non-root (restricted PSA),
      and reading the volume as non-root relies on fsGroup.

      While debugging this, I noticed that we don't set runAsGroup in the importer manifests/Dockerfile, which we
      might want to reconsider, as it could partially mitigate this?
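
      For illustration, a hedged sketch of what setting runAsGroup on the importer could look like (standard pod
      securityContext fields; applying them to the importer is the suggestion here, not current behavior):

      apiVersion: v1
      kind: Pod
      metadata:
        name: importer-runasgroup-sketch          # hypothetical name, illustration only
      spec:
        securityContext:
          runAsUser: 107
          runAsGroup: 107                         # primary GID becomes 107 instead of 0, so files the
                                                  # importer creates could end up group-owned by 107
        containers:
        - name: importer
          image: example.test/cdi-importer        # placeholder image
          command: ["sleep", "infinity"]

      For comparison, the importer currently runs with gid 0 as its primary group: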
      $ oc exec importer-windows-ey0bvq-installcdrom -it bash
      kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
      bash-5.1$ id
      uid=107(107) gid=0(root) groups=0(root),107
      bash-5.1$ ls -la /data/disk.img
      -rw-r--r--. 1 107 99 5694537728 Dec 27 10:30 /data/disk.img
      @mhenriks@redhat.com WDYT?

      — Additional comment from Michael Henriksen on 2022-12-29 01:49:52 UTC —

      Export pod should run as user 107.
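
      A minimal sketch of that on the export side (assumed manifest shape with a placeholder image; the real
      change would land in the exportserver manifests/image):

      apiVersion: v1
      kind: Pod
      metadata:
        name: exportserver-uid-sketch             # hypothetical name, illustration only
      spec:
        securityContext:
          runAsUser: 107                          # match the UID that owns disk.img (107) instead of 1001
          runAsGroup: 107
        containers:
        - name: exportserver
          image: example.test/virt-exportserver   # placeholder image
          command: ["sleep", "infinity"]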

      — Additional comment from Alexander Wels on 2023-02-06 15:15:07 UTC —

      So I checked https://github.com/kubevirt/kubevirt/blob/main/cmd/virt-launcher/BUILD.bazel#L111-L118 against https://github.com/kubevirt/kubevirt/blob/main/cmd/virt-exportserver/BUILD.bazel#L32-L39, and it looks like the virt-exportserver is not running as user 107 but as 1001.
