OpenShift Bugs / OCPBUGS-38922

Azure-file mount permission denied with private storage account created by internal image registry


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version/s: 4.17, 4.18
    • Component/s: Storage / Operators
    • Release Note Text: The Azure File CSI driver will no longer attempt to reuse existing storage accounts and will instead create its own during dynamic provisioning. For upgraded clusters this means that newly created Persistent Volumes will use a new storage account, while already provisioned Persistent Volumes will continue using the existing one as they did prior to the upgrade.
    • Release Note Type: Bug Fix
    • Status: In Progress

      Description of problem:

      When a private storage endpoint is configured on Azure by enabling the Image Registry Operator to discover VNet and subnet names [1], a cluster created with the internal image registry gets a storage account with a private endpoint. If a new PVC is then dynamically provisioned with the same skuName, the driver reuses this private storage account and the mount fails with a permission error.
       
      
      [1] https://docs.openshift.com/container-platform/4.16/post_installation_configuration/configuring-private-cluster.html#configuring-private-storage-endpoint-azure-vnet-subnet-iro-discovery_configuring-private-cluster
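
      For context, the procedure in [1] comes down to enabling internal network access in the image registry operator configuration; a minimal sketch of the relevant spec follows (field names as described in [1], values assumed). This is what leads the operator to create the private storage account and endpoint:

      # configs.imageregistry.operator.openshift.io/cluster (excerpt)
      # With no vnetName/subnetName set, the operator discovers them from the installer-created VNet/subnet.
      spec:
        storage:
          azure:
            networkAccess:
              type: Internal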

      Version-Release number of selected component (if applicable):

      4.17

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a cluster with the flexy job profile aos-4_17/ipi-on-azure/versioned-installer-customer_vpc-disconnected-fully_private_cluster-arm and specify enable_internal_image_registry: "yes"
      2. Create a pod and a PVC using the azurefile-csi storage class (example manifests below)
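
      A minimal sketch of step 2; the object names and the test image are illustrative assumptions, and the only essential part is storageClassName: azurefile-csi:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: azfile-pvc                    # hypothetical name
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: azurefile-csi     # pre-defined storage class that triggers the issue
        resources:
          requests:
            storage: 1Gi
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: azfile-pod                    # hypothetical name
      spec:
        containers:
          - name: app
            image: registry.access.redhat.com/ubi9/ubi-minimal   # any image works; assumed here
            command: ["sleep", "infinity"]
            volumeMounts:
              - name: data
                mountPath: /mnt/azurefile
        volumes:
          - name: data
            persistentVolumeClaim:
              claimName: azfile-pvc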

      Actual results:

      The pod fails to start due to a mount error:
      
      mount //imageregistryciophgfsnrc.file.core.windows.net/pvc-facecce9-d4b5-4297-b253-9a6200642392 on /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/b4b5e52fb1d21057c9644d0737723e8911d9519ec4c8ddcfcd683da71312a757/globalmount failed with mount failed: exit status 32
        Mounting command: mount
        Mounting arguments: -t cifs -o mfsymlinks,cache=strict,nosharesock,actimeo=30,gid=1018570000,file_mode=0777,dir_mode=0777, //imageregistryciophgfsnrc.file.core.windows.net/pvc-facecce9-d4b5-4297-b253-9a6200642392 /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/b4b5e52fb1d21057c9644d0737723e8911d9519ec4c8ddcfcd683da71312a757/globalmount
        Output: mount error(13): Permission denied 

      Expected results:

      The pod should start and the volume should mount successfully.

      Additional info:

      There are simple workarounds, such as using a StorageClass with networkEndpointType: privateEndpoint or specifying another storage account (see the sketch below), but the pre-defined azurefile-csi StorageClass will fail, and the workaround is not easy to apply in automation.
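
      A minimal sketch of the first workaround as a custom StorageClass; the name and skuName are illustrative assumptions, and networkEndpointType: privateEndpoint is the essential parameter:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: azurefile-csi-private             # hypothetical name
      provisioner: file.csi.azure.com
      parameters:
        skuName: Standard_LRS                   # assumed; pick the SKU you actually need
        networkEndpointType: privateEndpoint    # driver creates a private endpoint for its own storage account
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
      allowVolumeExpansion: true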
      
      I'm not sure whether the CSI driver could check whether a reused storage account has a private endpoint before deciding to use it.

              Assignee: Roman Bednar (rbednar@redhat.com)
              Reporter: Wei Duan (wduan@redhat.com)