OpenShift Bugs / OCPBUGS-48051

[vSphere-CSI-Driver] [multi-vcenter] pre-provisioning volumes attach failed due to: failed to get vCenter for the volumeID


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version/s: 4.18, 4.19
    • Component: Storage / Operators
    • Severity: Important
      Description of problem:

    In a multi-vCenter test environment, attaching a pre-provisioned volume fails with: failed to get vCenter for the volumeID
      
      
        Warning  FailedAttachVolume  3s (x5 over 13s)  attachdetach-controller  AttachVolume.Attach failed for volume "newpv-wfn79faf" : rpc error: code = Internal desc = failed to get volume manager for volume Id: "63bbeb00-f6c5-47bf-87df-bf399e31cbda". Error: rpc error: code = Internal desc = failed to get vCenter for the volumeID: "63bbeb00-f6c5-47bf-87df-bf399e31cbda" with err=Could not find vCenter for VolumeID: "63bbeb00-f6c5-47bf-87df-bf399e31cbda"
      
      
      The log from the CSI driver:
      
      {"level":"error","time":"2025-01-07T07:38:45.644611239Z","caller":"cnsvolumeinfo/cnsvolumeinfoservice.go:156","msg":"Could not find vCenter for VolumeID: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\"","TraceId":"57a393b3-a963-45d3-97e5-f0fcc395bfeb","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v3/pkg/internalapis/cnsvolumeinfo.(*volumeInfo).GetvCenterForVolumeID\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/internalapis/cnsvolumeinfo/cnsvolumeinfoservice.go:156\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.getVCenterAndVolumeManagerForVolumeID\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller_helper.go:241\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume.func1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2209\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2312\ngithub.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerPublishVolume_Handler\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:6616\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1372\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1783\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1016"}
      {"level":"error","time":"2025-01-07T07:38:45.644657517Z","caller":"vanilla/controller_helper.go:243","msg":"failed to get vCenter for the volumeID: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\" with err=Could not find vCenter for VolumeID: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\"","TraceId":"57a393b3-a963-45d3-97e5-f0fcc395bfeb","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.getVCenterAndVolumeManagerForVolumeID\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller_helper.go:243\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume.func1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2209\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2312\ngithub.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerPublishVolume_Handler\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:6616\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1372\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1783\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1016"}
      {"level":"error","time":"2025-01-07T07:38:45.644708445Z","caller":"vanilla/controller.go:2211","msg":"failed to get volume manager for volume Id: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\". Error: rpc error: code = Internal desc = failed to get vCenter for the volumeID: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\" with err=Could not find vCenter for VolumeID: \"63bbeb00-f6c5-47bf-87df-bf399e31cbda\"","TraceId":"57a393b3-a963-45d3-97e5-f0fcc395bfeb","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume.func1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2211\nsigs.k8s.io/vsphere-csi-driver/v3/pkg/csi/service/vanilla.(*controller).ControllerPublishVolume\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/csi/service/vanilla/controller.go:2312\ngithub.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerPublishVolume_Handler\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:6616\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1372\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1783\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\t/go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/google.golang.org/grpc/server.go:1016"}
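The stack trace points at the driver's internal cnsvolumeinfo service, which maps each volume ID to the vCenter that owns it. Presumably the driver records this mapping when it provisions a volume, so a statically created PV (which skips provisioning) has no mapping entry and the attach fails. A rough sketch of what such a mapping object might look like; the kind, API group, namespace, field names, and vCenter hostname below are all assumptions inferred from the stack trace, not taken from the actual CRD:

```yaml
# Hypothetical volume-ID -> vCenter mapping entry kept by the driver's
# cnsvolumeinfo service. All names and fields here are illustrative
# assumptions, not verified against the real CRD schema.
apiVersion: cns.vmware.com/v1alpha1
kind: CNSVolumeInfo
metadata:
  name: 63bbeb00-f6c5-47bf-87df-bf399e31cbda
  namespace: openshift-cluster-csi-drivers
spec:
  volumeID: 63bbeb00-f6c5-47bf-87df-bf399e31cbda
  vCenter: vcenter-1.example.com   # hypothetical vCenter hostname
```

If such objects exist only for dynamically provisioned volumes, that would explain why only pre-provisioned PVs hit this error.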

      Version-Release number of selected component (if applicable):

          4.19.0-0.nightly-2024-12-21-202735

      How reproducible:

          Always

      Steps to Reproduce:

          1. Create a vSphere cluster that spans multiple vCenters
          2. Create a StorageClass with reclaimPolicy Retain and provision a volume, so the backing volume is retained in the back end
          3. Create a new PV (referencing the retained volume) and a PVC
          4. Create a pod to consume the PVC
          5. Check the pod status
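The steps above can be sketched roughly as follows; the object names, image, and mount path are illustrative placeholders, while the storage class and PV names match the PV spec shown further down:

```yaml
# Step 2: a StorageClass with reclaimPolicy Retain, so the CNS volume
# survives deletion of the original PVC. Name is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-sc
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Retain
---
# Step 3: a PVC bound to the statically created PV (the PV spec itself
# appears below in this report).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: newpvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: manual-sc-44907
  volumeName: newpv-wfn79faf
---
# Step 4: a pod consuming the PVC.
apiVersion: v1
kind: Pod
metadata:
  name: newpod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: newpvc
```

With this setup, step 5 is simply watching the pod events, where the FailedAttachVolume warning quoted in the description appears.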
      
      
      The new pv looks like:
       
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
          volume.kubernetes.io/provisioner-deletion-secret-name: ''
          volume.kubernetes.io/provisioner-deletion-secret-namespace: ''
        creationTimestamp: '2025-01-07T07:35:51Z'
        finalizers:
        - kubernetes.io/pv-protection
        - external-attacher/csi-vsphere-vmware-com
        name: newpv-wfn79faf
        resourceVersion: '8354041'
        uid: f7c3d68c-b3a5-45af-b080-ae438371f738
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 2Gi
        csi:
          driver: csi.vsphere.vmware.com
          fsType: ext4
          volumeAttributes:
            type: vSphere CNS Block Volume
          volumeHandle: 63bbeb00-f6c5-47bf-87df-bf399e31cbda
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.csi.vmware.com/openshift-zone
                operator: In
                values:
                - us-east-1a
              - key: topology.csi.vmware.com/openshift-region
                operator: In
                values:
                - us-east-1
        persistentVolumeReclaimPolicy: Delete
        volumeMode: Filesystem
        storageClassName: manual-sc-44907
           

      Actual results:

      The pod is not running, with a FailedAttachVolume error:
        Warning  FailedAttachVolume  3s (x5 over 13s)  attachdetach-controller  AttachVolume.Attach failed for volume "newpv-wfn79faf" : rpc error: code = Internal desc = failed to get volume manager for volume Id: "63bbeb00-f6c5-47bf-87df-bf399e31cbda". Error: rpc error: code = Internal desc = failed to get vCenter for the volumeID: "63bbeb00-f6c5-47bf-87df-bf399e31cbda" with err=Could not find vCenter for VolumeID: "63bbeb00-f6c5-47bf-87df-bf399e31cbda"

      Expected results:

      The pod should be running.

      Additional info:

          

              Assignee: Hemant Kumar (hekumar@redhat.com)
              Reporter: Wei Duan (wduan@redhat.com)