OpenShift Bugs / OCPBUGS-54715

[4.19][vSphere] Volume is provisioned even with a wrong datastoreurl using the in-tree provisioner


    • Type: Bug
    • Resolution: Won't Do
    • Priority: Normal
    • Affects Version: 4.19.0
    • Component: Storage
    • Category: Quality / Stability / Reliability
    • Severity: Low

      Description of problem:

      A volume is successfully provisioned even though the in-tree provisioner StorageClass (kubernetes.io/vsphere-volume) specifies a wrong, nonexistent datastoreurl.

      Version-Release number of selected component (if applicable):

      4.19.0-0.nightly-2025-04-04-170728

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a vSphere cluster.
      Flexy profile: ipi-on-vsphere/versioned-installer-fips
      2. Create a StorageClass, PVC, and Deployment using the in-tree provisioner with a wrong datastoreurl (manifests in sc_pvc_dep.yaml below; an apply sketch follows them).
      3. Check the PVC and pod status.
      
      sc_pvc_dep.yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: mystorageclass-intreedatastore
      provisioner: kubernetes.io/vsphere-volume
      reclaimPolicy: Delete
      volumeBindingMode: WaitForFirstConsumer
      parameters:
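        # intentionally invalid datastore URL; per the 4.12 error shown below, the
        # in-tree plugin does not even recognize a datastoreurl parameter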
        datastoreurl: non:///non/nonexist/
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc-fs-intree
        namespace: testropatil
      spec:
        accessModes:
          - ReadWriteOnce
        volumeMode: Filesystem
        storageClassName: mystorageclass-intreedatastore
        resources:
          requests:
            storage: 1Gi
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: mydep-fs-intree
        namespace: testropatil
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: hello-storage1
        template:
          metadata:
            labels:
              app: hello-storage1
          spec:
            containers:
            - name: hello-storage
              image: quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623
              ports:
              - containerPort: 80
              volumeMounts:
              - name: local
                mountPath: /mnt/storage
            volumes:
            - name: local
              persistentVolumeClaim:
                claimName: mypvc-fs-intree
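
      Assuming the manifests above are saved as sc_pvc_dep.yaml, one way to apply and check them (namespace name taken from the manifests):

      oc new-project testropatil
      oc apply -f sc_pvc_dep.yaml
      oc get pvc,pod -n testropatil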

      Actual results:

      oc get pvc,pod -n testropatil  
      NAME                                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
      persistentvolumeclaim/mypvc-fs-intree         Bound     pvc-8bf4be66-8aff-4d2b-b808-aa4210af6d5a   1Gi        RWO            mystorageclass-intreedatastore   <unset>                 50s
      
      NAME                                        READY   STATUS    RESTARTS   AGE
      pod/mydep-fs-intree-6bfd4df4fb-488dt        1/1     Running   0          51s
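
      To confirm the bogus datastoreurl was ignored rather than honored, the bound PV (name taken from the output above) can be dumped to see which datastore the volume actually landed on:

      oc get pv pvc-8bf4be66-8aff-4d2b-b808-aa4210af6d5a -o yaml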

      Expected results:

      Provisioning should fail and the PVC should remain Pending, as it does on a 4.12 cluster (see below).

      Additional info:

      This issue occurs from 4.13 onwards, which lines up with a change in the provisioning path: in the 4.12 output below the in-tree PVC is handled by the persistentvolume-controller, which rejects the unsupported datastoreurl parameter, while on 4.19 the same PVC is provisioned by csi.vsphere.vmware.com (CSI migration), which appears to drop the parameter instead of rejecting it.
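      One way to see which provisioner actually handled the in-tree PVC is to dump its annotations and look for the volume.kubernetes.io/storage-provisioner annotation:

      oc get pvc mypvc-fs-intree -n testropatil -o jsonpath='{.metadata.annotations}'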
      
      4.12 cluster, with both the in-tree and CSI StorageClasses:
      oc get pvc,pod -n testropatil  
      NAME                                          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                     AGE
      persistentvolumeclaim/mypvc-fs-csidatastore   Pending                                      mystorageclass-csidatastore      55s
      persistentvolumeclaim/mypvc-fs-intree         Pending                                      mystorageclass-intreedatastore   52s
      
      NAME                                         READY   STATUS    RESTARTS   AGE
      pod/mydep-fs-csidatastore-847bb57f8c-h8f8b   0/1     Pending   0          56s
      pod/mydep-fs-intree-6bf9f68b48-lqwqz         0/1     Pending   0          53s
      
      oc describe pvc/mypvc-fs-csidatastore
      Warning  ProvisioningFailed    74s (x9 over 4m26s)  csi.vsphere.vmware.com_vmware-vsphere-csi-driver-controller-54ffb459fd-54clw_c6085d3c-cca5-4e46-b554-7d9013c0767b  failed to provision volume with StorageClass "mystorageclass-csidatastore": rpc error: code = Internal desc = failed to create volume. Error: DatastoreURL: non:///non/nonexist/ specified in the storage class is not found.
      
      oc describe pvc/mypvc-fs-intree
      Warning  ProvisioningFailed    16s (x8 over 2m3s)   persistentvolume-controller  Failed to provision volume with StorageClass "mystorageclass-intreedatastore": invalid option "datastoreurl" for volume plugin kubernetes.io/vsphere-volume
      
      
      4.19 cluster:
      oc get pvc,pod -n testropatil  
      NAME                                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
      persistentvolumeclaim/mypvc-fs-csidatastore   Pending                                                                        mystorageclass-csidatastore      <unset>                 54s
      persistentvolumeclaim/mypvc-fs-intree         Bound     pvc-8bf4be66-8aff-4d2b-b808-aa4210af6d5a   1Gi        RWO            mystorageclass-intreedatastore   <unset>                 50s
      
      NAME                                        READY   STATUS    RESTARTS   AGE
      pod/mydep-fs-csidatastore-9847c4b55-4z8vn   0/1     Pending   0          54s
      pod/mydep-fs-intree-6bfd4df4fb-488dt        1/1     Running   0          51s
      
      oc describe pvc/mypvc-fs-intree
      Normal  ProvisioningSucceeded  2m57s                  csi.vsphere.vmware.com_vmware-vsphere-csi-driver-controller-6946f5b755-ngkf9_d0f164c5-731a-4575-9c81-01b6ea52842f  Successfully provisioned volume pvc-8bf4be66-8aff-4d2b-b808-aa4210af6d5a
      
      oc describe pvc/mypvc-fs-csidatastore
      Warning  ProvisioningFailed    2m3s (x9 over 5m15s)  csi.vsphere.vmware.com_vmware-vsphere-csi-driver-controller-6946f5b755-ngkf9_d0f164c5-731a-4575-9c81-01b6ea52842f  failed to provision volume with StorageClass "mystorageclass-csidatastore": rpc error: code = Internal desc = failed to create volume. Error: Datastore: non:///non/nonexist/ specified in the storage class is not accessible to all nodes in vCenter "vcenter.devqe.ibmc.devcluster.openshift.com"
      
      sc_csi.yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: mystorageclass-csidatastore
      provisioner: csi.vsphere.vmware.com
      reclaimPolicy: Delete
      volumeBindingMode: WaitForFirstConsumer
      parameters:
        datastoreurl: non:///non/nonexist/
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc-fs-csidatastore
        namespace: testropatil
      spec:
        accessModes:
          - ReadWriteOnce
        volumeMode: Filesystem
        storageClassName: mystorageclass-csidatastore
        resources:
          requests:
            storage: 1Gi
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: mydep-fs-csidatastore
        namespace: testropatil
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: hello-storage1
        template:
          metadata:
            labels:
              app: hello-storage1
          spec:
            containers:
            - name: hello-storage
              image: quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623
              ports:
              - containerPort: 80
              volumeMounts:
              - name: local
                mountPath: /mnt/storage
            volumes:
            - name: local
              persistentVolumeClaim:
                claimName: mypvc-fs-csidatastore
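
      These manifests, assuming they are saved as sc_csi.yaml, were applied the same way as sc_pvc_dep.yaml above:

      oc apply -f sc_csi.yaml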

              Assignee: Unassigned
              Reporter: Rohit Patil (ropatil@redhat.com)
              QA Contact: Wei Duan
              Votes: 0
              Watchers: 4