OpenShift Bugs: OCPBUGS-18701

[Azuredisk-csi-driver] allocatable volumes count incorrect in csinode for Standard_B4as_v2 instance types


    • Important
    • No
    • Rejected
    • False
    • Previously, the Azure Disk CSI driver did not correctly count allocatable volumes for certain instance types, reporting a limit that exceeded the actual maximum. This caused pods to fail to start. With this release, the count table for the Azure Disk CSI driver has been updated to include the new instance types. Pods now run, and data can be read from and written to properly configured volumes. (link:https://issues.redhat.com/browse/OCPBUGS-18701[*OCPBUGS-18701*])
    • Bug Fix
    • Done

      Description of problem:

      [Azuredisk-csi-driver] allocatable volumes count incorrect in csinode for Standard_B4as_v2 instance types

      Version-Release number of selected component (if applicable):

      4.14.0-0.nightly-2023-09-02-132842

      How reproducible:

      Always

      Steps to Reproduce:

      1. Install an Azure OpenShift cluster using the Standard_B4as_v2 instance type
      2. Check the allocatable volume count in the csinode object
      3. Create a pod that mounts the maximum allocatable number of PVCs (provisioned by azuredisk-csi-driver)
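Step 3 can be sketched as a small script that generates a Pod manifest mounting one PVC per slot that the csinode reports as allocatable. This is only an illustration: the PVC names (pvc-0 through pvc-15), the pod name, and the image are hypothetical, and the count of 16 is the (incorrect) value the csinode reports in step 2.

```shell
# Sketch: generate a Pod manifest with one volume per reported allocatable slot.
# count=16 is the value reported by the csinode (the buggy value for B4as_v2);
# pvc-0..pvc-15 are assumed to be PVCs already provisioned by disk.csi.azure.com.
count=16
{
  echo "apiVersion: v1"
  echo "kind: Pod"
  echo "metadata:"
  echo "  name: max-disks-test"
  echo "spec:"
  echo "  containers:"
  echo "  - name: app"
  echo "    image: registry.access.redhat.com/ubi9/ubi"
  echo "    command: [\"sleep\", \"infinity\"]"
  echo "    volumeMounts:"
  for i in $(seq 0 $((count - 1))); do
    echo "    - name: disk-$i"
    echo "      mountPath: /mnt/disk-$i"
  done
  echo "  volumes:"
  for i in $(seq 0 $((count - 1))); do
    echo "  - name: disk-$i"
    echo "    persistentVolumeClaim:"
    echo "      claimName: pvc-$i"
  done
} > pod-max-disks.yaml
```

Applying the generated manifest with `oc apply -f pod-max-disks.yaml` on an affected cluster is what triggers the attach failure described below, because the VM size only supports 8 data disks.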

      Actual results:

      In step 2 the allocatable volumes count is 16.
      $ oc get csinode pewang-0908s-r6lwd-worker-southcentralus3-tvwwr -ojsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
      16
      
      In step 3 the pod is stuck in ContainerCreating because attaching the volumes fails:
      09-07 22:38:28.758        "message": "The maximum number of data disks allowed to be attached to a VM of this size is 8."

      Expected results:

      In step 2 the allocatable volumes count should be 8.
      In step 3 the pod should be Running and all volumes should be readable and writable.

      Additional info:

      $ az vm list-skus -l eastus --query "[?name=='Standard_B4as_v2']"| jq -r '.[0].capabilities[] | select(.name =="MaxDataDiskCount")'
      {
        "name": "MaxDataDiskCount",
        "value": "8"
      }
      
      Currently in 4.14 we use the v1.28.1 driver. I checked the upstream issues and PRs; the issue is fixed in v1.28.2:
      https://github.com/kubernetes-sigs/azuredisk-csi-driver/releases/tag/v1.28.2

        rhn-support-tsmetana Tomas Smetana
        rhn-support-pewang Penghao Wang
        Shauna Diaz