Type: Bug
Status: CLOSED
Resolution: Done-Errata
Priority: Major
Severity: Moderate
Quality / Stability / Reliability
Sprints: Storage Core Sprint 225, Storage Core Sprint 226, Storage Core Sprint 227
Description of problem:
virtctl image-upload hangs waiting for the upload pod to be ready when using a storage class with no default access mode defined in its storage profile
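For reference, the missing default can be confirmed by inspecting the CDI StorageProfile that corresponds to the storage class (a sketch, assuming the "nfs" storage class used in the reproduction steps below; the exact contents depend on the cluster and provisioner):

oc get storageprofile nfs -o yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: nfs
spec: {}
status:
  storageClass: nfs

With no claimPropertySets (and therefore no accessModes) in either spec or status, virtctl/CDI has nothing to auto-detect for the upload PVC.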
Version-Release number of selected component (if applicable):
virtctl version
Client Version: version.Info
Server Version: version.Info{GitVersion:"v0.56.0-rc.0-248-g30d8ed316", GitCommit:"30d8ed316ca093ee83fc632449f058b2f690696b", GitTreeState:"clean", BuildDate:"2022-08-31T19:44:16Z", GoVersion:"go1.18.4", Compiler:"gc", Platform:"linux/amd64"}

oc version
Client Version: 4.12.0-ec.1
Kustomize Version: v4.5.4
Server Version: 4.12.0-ec.1
Kubernetes Version: v1.24.0+a9d6306

oc get csv -n openshift-cnv
NAME                                       DISPLAY                    VERSION   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v4.12.0   OpenShift Virtualization   4.12.0    kubevirt-hyperconverged-operator.v4.10.5   Succeeded
volsync-product.v0.5.0                     VolSync                    0.5.0                                                Succeeded
How reproducible:
100%
Steps to Reproduce:
1. virtctl image-upload dv my-data-volume6 --size=2Gi --storage-class=nfs --image-path=./cirros-0.4.0-x86_64-disk.img
PVC default/my-data-volume6 not found
DataVolume default/my-data-volume6 created
Waiting for PVC my-data-volume6 upload pod to be ready... >>>>>> Hangs until timeout
Actual results:
PVC default/my-data-volume7 not found
DataVolume default/my-data-volume7 created
Waiting for PVC my-data-volume7 upload pod to be ready...
Expected results:
Fail gracefully with an error indicating that the accessMode could not be auto-detected
Additional info:
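Possible workarounds until virtctl fails gracefully (both are sketches assuming the "nfs" storage class from the reproduction steps; pick an access mode the provisioner actually supports):

1. Specify the access mode explicitly so no auto-detection is needed:
virtctl image-upload dv my-data-volume6 --size=2Gi --storage-class=nfs --access-mode=ReadWriteOnce --image-path=./cirros-0.4.0-x86_64-disk.img

2. Give the storage profile a default access mode so future uploads can auto-detect it:
oc patch storageprofile nfs --type=merge -p '{"spec":{"claimPropertySets":[{"accessModes":["ReadWriteMany"],"volumeMode":"Filesystem"}]}}'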