OpenShift Virtualization / CNV-12262

[1967086] Cloning DataVolumes between namespaces fails while creating cdi-upload pod


    • Sprint: Storage Core Sprint 203, Storage Core Sprint 204, Storage Core Sprint 205
    • Priority: High

      Description of problem:

      While cloning data volume between namespaces, the cloning is scheduled but never starts.

      $ oc get dvs
      NAME                   PHASE            PROGRESS   RESTARTS   AGE
      dv-tests-cloning-001   CloneScheduled   N/A                   30s

      The PVC status is "Bound".

      $ oc get pvc
      NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           AGE
      dv-tests-cloning-001   Bound    pvc-326a53af-b8f7-4328-a30f-76ec1d21ee21   12Gi       RWO            ocs-external-storagecluster-ceph-rbd   33s

      But there is no cdi-upload pod.
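      (Presumably confirmed with a plain name filter such as the following, which returns no matching pod in this state:)

      $ oc get pods -n tests-cloning | grep cdi-upload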

      The cdi-deployment logs show the error "Pod \"cdi-upload-dv-tests-cloning-001\" is invalid: spec.containers[0].resources.requests: Invalid value: \"1m\": must be less than or equal to cpu limit".

      ===

      {"level":"error","ts":1622451245.2904956,"logger":"controller","msg":"Reconciler error","controller":"upload-controller","name":"dv-tests-cloning-001","namespace":"tests-cloning","error":"Pod \"cdi-upload-dv-tests-cloning-001\" is invalid: spec.containers[0].resources.requests: Invalid value: \"1m\": must be less than or equal to cpu limit","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:237\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}

      ===
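      The message itself is standard Kubernetes API validation: a container whose CPU request is greater than its CPU limit is rejected at admission. As a minimal illustration (a hypothetical standalone pod, not the actual spec CDI generates), the manifest below triggers the same rejection, because the request of "1m" exceeds the explicit limit of "0":

      ===
      apiVersion: v1
      kind: Pod
      metadata:
        name: requests-over-limit-demo
      spec:
        containers:
        - name: demo
          image: registry.access.redhat.com/ubi8/ubi-minimal
          resources:
            limits:
              cpu: "0"
            requests:
              cpu: "1m"
      ===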

      As per my understanding, pod.Spec.Containers[0].Resources for the upload pod is populated from defaultPodResourceRequirements in the CDIConfig status, which here holds the default values:

      ===
      $ oc get cdiconfig -o yaml
      apiVersion: v1
      items:
      - apiVersion: cdi.kubevirt.io/v1beta1
        kind: CDIConfig
        status:
          defaultPodResourceRequirements:
            limits:
              cpu: "0"
              memory: "0"
            requests:
              cpu: "0"
              memory: "0"
      ===
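      For context, a rough Go sketch of how a controller would carry these defaults into the worker pod's container (illustrative only; the CDIConfigStatus type here is a stand-in for the real CDI API, not CDI's actual code):

      ===
      package sketch

      import (
          corev1 "k8s.io/api/core/v1"
      )

      // CDIConfigStatus mirrors the relevant part of the CDIConfig status shown above.
      type CDIConfigStatus struct {
          DefaultPodResourceRequirements *corev1.ResourceRequirements
      }

      // podResources returns the resources a worker pod (e.g. cdi-upload) would be
      // created with: the CDIConfig defaults when present, empty otherwise. With the
      // values above this yields cpu/memory limits and requests of 0, so a request
      // of "1m" could only appear if something mutated the spec afterwards.
      func podResources(status CDIConfigStatus) corev1.ResourceRequirements {
          if status.DefaultPodResourceRequirements != nil {
              return *status.DefaultPodResourceRequirements
          }
          return corev1.ResourceRequirements{}
      }
      ===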

      I cannot find a way to see the exact pod spec the CDI controller submits when creating the pod, but the error implies it is sending a CPU request that is higher than the CPU limit. However, that does not match the CDIConfig above, where both requests and limits are zero.

      There are no ResourceQuotas or LimitRanges in the namespace.
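      (Presumably verified with something like the following, which reports no resources:)

      $ oc get resourcequota,limitrange -n tests-cloning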

      The permissions required for cross-namespace cloning are also mapped correctly.

      Version-Release number of selected component (if applicable):

      2.5.5

      How reproducible:

      Observed in a customer environment and not reproduced locally.

      Steps to Reproduce:

      1. Clone a DataVolume between namespaces; an example manifest of the kind involved is shown below.
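      (A minimal cross-namespace clone manifest; the source namespace and PVC names are placeholders, while the size and storage class match the output above:)

      ===
      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: dv-tests-cloning-001
        namespace: tests-cloning
      spec:
        source:
          pvc:
            namespace: source-namespace   # placeholder: namespace of the source PVC
            name: source-pvc              # placeholder: name of the PVC to clone
        pvc:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 12Gi
          storageClassName: ocs-external-storagecluster-ceph-rbd
      ===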

      Actual results:

      Cloning DataVolumes between namespaces fails while creating the cdi-upload pod; the DataVolume stays in CloneScheduled.

      Expected results:

      Cloning should complete successfully.

      Additional info:

            Assignee: rhn-support-awels (Alexander Wels)
            Reporter: rhn-support-nashok (Nijin Ashok)
            QA Contact: Kevin Alon Goldblatt