OpenShift Virtualization · CNV-38692

LVMS on SNO: can't import a DV with contentType: archive


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Undefined
    • Affects Version: CNV v4.15.0
    • Component: CNV Storage
    • Sprint: Storage Core Sprint 250

      Description of problem:

      A DV with 'contentType: archive' stays Pending.
      LVMS storage class, tested with both [RWO / Block] and [RWO / Filesystem].
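
      For context, the access/volume modes CDI picks for lvms-vg1 come from its CDI StorageProfile. A quick way to inspect the profile and the storage class (illustrative commands, not output captured from the affected cluster):

      # Inspect the CDI StorageProfile and the StorageClass backing lvms-vg1 (illustrative)
      oc get storageprofile lvms-vg1 -o yaml
      oc get storageclass lvms-vg1 -o yaml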

      Version-Release number of selected component (if applicable):

      4.15 (OCP, CNV, and ODF upgraded from 4.14), SNO cluster

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a DV  
      
      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: cnv-2145-lvms-vg1
        namespace: cdi-import-test-import-htt
      spec:
        contentType: archive
        source:
          http:
            url: http://internal-http.cnv-tests-utilities/archive.tar
        storage:
          resources:
            requests:
              storage: 1Gi
          storageClassName: lvms-vg1
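
      A minimal way to apply the manifest and watch progress (the file name below is illustrative):

      # Apply the DataVolume and watch the DV/PVC phase (file name is illustrative)
      oc apply -f dv-archive.yaml
      oc get dv cnv-2145-lvms-vg1 -n cdi-import-test-import-htt -w
      oc get pvc -n cdi-import-test-import-htt -w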
      
      
      (The result is the same when contentType is archive but the image is a non-archive type such as raw, raw.gz, or img: the volume fails to provision.)
      
      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: dv-lvms-arc
        annotations:
          cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
      spec:
        contentType: archive
        source:
          http:
            url: http://....../cirros-images/cirros-0.4.0-x86_64-disk.raw.gz
        storage:
          resources:
            requests:
              storage: 200Mi
          storageClassName: lvms-vg1 

      Actual results:

      1. PVC is Pending
      
      $ oc get pvc
      NAME                                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      dv-lvms-arc                                          Pending                                      lvms-vg1       27m
      prime-2bd34e06-b728-4655-9937-9038d1a4406d           Pending                                      lvms-vg1       27m
      prime-2bd34e06-b728-4655-9937-9038d1a4406d-scratch   Pending                                      lvms-vg1       27m
      

       

      $ oc describe pvc dv-lvms-arc | grep Events -A 10
      Events:
        Type    Reason                       Age                    From                         Message
        ----    ------                       ----                   ----                         -------
        Normal  CreatedPVCPrimeSuccessfully  28m                    import-populator             PVC Prime created successfully
        Normal  WaitForFirstConsumer         3m15s (x106 over 28m)  persistentvolume-controller  waiting for first consumer to be created before binding
      $ oc describe pvc prime-2bd34e06-b728-4655-9937-9038d1a4406d | grep Events -A 10
      Events:
        Type     Reason                Age                    From                                                                                 Message
        ----     ------                ----                   ----                                                                                 -------
        Normal   WaitForFirstConsumer  28m (x2 over 28m)      persistentvolume-controller                                                          waiting for first consumer to be created before binding
        Normal   WaitForPodScheduled   28m                    persistentvolume-controller                                                          waiting for pod importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d to be scheduled
        Warning  ProvisioningFailed    4m59s (x14 over 28m)   topolvm.io_topolvm-controller-765c99856c-cvltw_2a3169ec-1595-4946-bdf1-0d9f1a3121c0  failed to provision volume with StorageClass "lvms-vg1": rpc error: code = Internal desc = exit status 3
        Normal   Provisioning          4m55s (x15 over 28m)   topolvm.io_topolvm-controller-765c99856c-cvltw_2a3169ec-1595-4946-bdf1-0d9f1a3121c0  External provisioner is provisioning volume for claim "default/prime-2bd34e06-b728-4655-9937-9038d1a4406d"
        Normal   ExternalProvisioning  3m28s (x105 over 28m)  persistentvolume-controller                                                          Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
      $ oc describe pvc prime-2bd34e06-b728-4655-9937-9038d1a4406d-scratch | grep Events -A 10
      Events:
        Type     Reason                Age                    From                                                                                 Message
        ----     ------                ----                   ----                                                                                 -------
        Normal   WaitForPodScheduled   28m                    persistentvolume-controller                                                          waiting for pod importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d to be scheduled
        Warning  ProvisioningFailed    5m17s (x14 over 28m)   topolvm.io_topolvm-controller-765c99856c-cvltw_2a3169ec-1595-4946-bdf1-0d9f1a3121c0  failed to provision volume with StorageClass "lvms-vg1": rpc error: code = Internal desc = exit status 3
        Normal   ExternalProvisioning  3m46s (x104 over 28m)  persistentvolume-controller                                                          Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
        Normal   Provisioning          12s (x16 over 28m)     topolvm.io_topolvm-controller-765c99856c-cvltw_2a3169ec-1595-4946-bdf1-0d9f1a3121c0  External provisioner is provisioning volume for claim "default/prime-2bd34e06-b728-4655-9937-9038d1a4406d-scratch"
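
      The provisioner error is opaque ("exit status 3"); the underlying lvcreate/lvmd failure usually shows up in the topolvm controller logs. A sketch of how to pull them (the pod name is taken from the events above; the LVMS namespace is an assumption and may differ per install):

      # Pull logs from the topolvm controller that reported "exit status 3"
      # (namespace is an assumption; adjust to the LVMS install)
      oc logs -n openshift-storage topolvm-controller-765c99856c-cvltw --all-containers | grep -iE 'error|exit status'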

       

      PVC prime yaml:

      $ oc get pvc prime-2bd34e06-b728-4655-9937-9038d1a4406d -oyaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        annotations:
          cdi.kubevirt.io/storage.bind.immediate.requested: ""
          cdi.kubevirt.io/storage.condition.bound: "false"
          cdi.kubevirt.io/storage.condition.bound.message: Claim Pending
          cdi.kubevirt.io/storage.condition.bound.reason: Claim Pending
          cdi.kubevirt.io/storage.contentType: archive
          cdi.kubevirt.io/storage.import.endpoint: http://.......<server>......../cirros-images/cirros-0.4.0-x86_64-disk.raw.gz
          cdi.kubevirt.io/storage.import.importPodName: importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d
          cdi.kubevirt.io/storage.import.requiresScratch: "false"
          cdi.kubevirt.io/storage.import.source: http
          cdi.kubevirt.io/storage.pod.phase: Pending
          cdi.kubevirt.io/storage.populator.kind: VolumeImportSource
          cdi.kubevirt.io/storage.preallocation.requested: "false"
          sidecar.istio.io/inject: "false"
          volume.beta.kubernetes.io/storage-provisioner: topolvm.io
          volume.kubernetes.io/selected-node: cnv-qe-infra-28.cnvqe2.lab.eng.rdu2.redhat.com
          volume.kubernetes.io/storage-provisioner: topolvm.io
        creationTimestamp: "2024-02-25T15:07:48Z"
        finalizers:
        - kubernetes.io/pvc-protection
        labels:
          app: containerized-data-importer
          app.kubernetes.io/component: storage
          app.kubernetes.io/managed-by: cdi-controller
          app.kubernetes.io/part-of: hyperconverged-cluster
          app.kubernetes.io/version: 4.15.0
        name: prime-2bd34e06-b728-4655-9937-9038d1a4406d
        namespace: default
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: PersistentVolumeClaim
          name: dv-lvms-arc
          uid: 2bd34e06-b728-4655-9937-9038d1a4406d
        resourceVersion: "2905305"
        uid: 25ca4795-3d18-44e4-a775-e1a3f6865960
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "221920847"
        storageClassName: lvms-vg1
        volumeMode: Filesystem
      status:
        phase: Pending
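
      Note that the prime PVC requests an exact byte count ("221920847") with volumeMode: Filesystem rather than a rounded-up size. A quick way to compare what the DV asked for with what CDI put on the prime PVC (illustrative commands):

      # Compare the DV request with the prime PVC request (illustrative)
      oc get dv dv-lvms-arc -n default -o jsonpath='{.spec.storage.resources.requests.storage}{"\n"}'
      oc get pvc prime-2bd34e06-b728-4655-9937-9038d1a4406d -n default \
        -o jsonpath='{.spec.resources.requests.storage} {.spec.volumeMode}{"\n"}'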
       

      Importer pod:

       

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          cdi.kubevirt.io/storage.createdByController: "yes"
          sidecar.istio.io/inject: "false"
        creationTimestamp: "2024-02-25T15:07:48Z"
        labels:
          app: containerized-data-importer
          app.kubernetes.io/component: storage
          app.kubernetes.io/managed-by: cdi-controller
          app.kubernetes.io/part-of: hyperconverged-cluster
          app.kubernetes.io/version: 4.15.0
          cdi.kubevirt.io: importer
          prometheus.cdi.kubevirt.io: "true"
        name: importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d
        namespace: default
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: PersistentVolumeClaim
          name: prime-2bd34e06-b728-4655-9937-9038d1a4406d
          uid: 25ca4795-3d18-44e4-a775-e1a3f6865960
        resourceVersion: "2911180"
        uid: 5e68192c-d20e-4a8e-af5a-b239622d87f3
      spec:
        containers:
        - args:
          - -v=1
          env:
          - name: IMPORTER_SOURCE
            value: http
          - name: IMPORTER_ENDPOINT
            value: http://...........................<server>......../cirros-0.4.0-x86_64-disk.raw.gz
          - name: IMPORTER_CONTENTTYPE
            value: archive
          - name: IMPORTER_IMAGE_SIZE
            value: "221920847"
          - name: OWNER_UID
            value: 2bd34e06-b728-4655-9937-9038d1a4406d
          - name: FILESYSTEM_OVERHEAD
            value: "0.055"
          - name: INSECURE_TLS
            value: "false"
          - name: IMPORTER_DISK_ID
          - name: IMPORTER_UUID
          - name: IMPORTER_PULL_METHOD
          - name: IMPORTER_READY_FILE
          - name: IMPORTER_DONE_FILE
          - name: IMPORTER_BACKING_FILE
          - name: IMPORTER_THUMBPRINT
          - name: http_proxy
          - name: https_proxy
          - name: no_proxy
          - name: IMPORTER_CURRENT_CHECKPOINT
          - name: IMPORTER_PREVIOUS_CHECKPOINT
          - name: IMPORTER_FINAL_CHECKPOINT
          - name: PREALLOCATION
            value: "false"
          image: registry.redhat.io/container-native-virtualization/virt-cdi-importer-rhel9@sha256:12b7ecd91326344aaf674ddfa7c3a6bda29b83c8240ee67b06603fe532e51a01
          imagePullPolicy: IfNotPresent
          name: importer
          ports:
          - containerPort: 8443
            name: metrics
            protocol: TCP
          resources:
            limits:
              cpu: 750m
              memory: 600M
            requests:
              cpu: 100m
              memory: 60M
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            runAsUser: 107
            seccompProfile:
              type: RuntimeDefault
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /data
            name: cdi-data-vol
          - mountPath: /scratch
            name: cdi-scratch-vol
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-lhczz
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        preemptionPolicy: PreemptLowerPriority
        priority: 0
        restartPolicy: OnFailure
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 107
        serviceAccount: default
        serviceAccountName: default
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - effect: NoSchedule
          key: node.kubernetes.io/memory-pressure
          operator: Exists
        volumes:
        - name: cdi-data-vol
          persistentVolumeClaim:
            claimName: prime-2bd34e06-b728-4655-9937-9038d1a4406d
        - name: cdi-scratch-vol
          persistentVolumeClaim:
            claimName: prime-2bd34e06-b728-4655-9937-9038d1a4406d-scratch
        - name: kube-api-access-lhczz
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
            - configMap:
                items:
                - key: service-ca.crt
                  path: service-ca.crt
                name: openshift-service-ca.crt
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2024-02-25T15:07:48Z"
          message: 'running PreBind plugin "VolumeBinding": binding volumes: timed out waiting
            for the condition'
          reason: SchedulerError
          status: "False"
          type: PodScheduled
        phase: Pending
        qosClass: Burstable
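
      The importer pod never schedules because volume binding times out; the PodScheduled condition message can be pulled directly (illustrative command):

      # Surface the scheduling failure on the importer pod (illustrative)
      oc get pod importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d -n default \
        -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].message}{"\n"}'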
       

       

      Expected results:

      If content type is archive and the image is an archive:
           DV imported successfully
      If content type is archive but the image is non-archive, we expect the error message:
           Unable to process data: Unable to transfer source data to target directory: unable to untar files from endpoint: exit status 2
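
      Once the importer actually runs, that untar error is expected to surface in the importer pod logs and on the DV conditions; a sketch of where to look (illustrative commands):

      # Where the untar error should surface once the importer runs (illustrative)
      oc logs importer-prime-2bd34e06-b728-4655-9937-9038d1a4406d -n default
      oc describe dv dv-lvms-arc -n default | grep -A 5 Conditions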

      Additional info:

       

            Assignee: Alex Kalenyuk
            Reporter: Jenia Peimer