Azure Kubernetes Service (AKS) with Different Storage Options — Part 1

Jonathan
Nov 11, 2021

The AKS product group has worked hard on integrating AKS with Azure-native and third-party storage services. There are essentially seven available options as of this writing.

  • Azure Managed Disks (Static/Dynamic)
  • Azure Ultra Disk
  • Azure Files (Static/Dynamic)
  • Azure High-performance Computing (HPC) Cache
  • Azure NetApp Files
  • Network File System (NFS)
  • Container Storage Interface (CSI) Drivers

The diagram below shows a simplified architecture of how AKS leverages Azure-native storage services.

Image Source: Concepts — Storage in Azure Kubernetes Services (AKS) — Azure Kubernetes Service | Microsoft Docs

Let’s try to see how they work in practice and what option is suitable for what situation.

Azure Managed Disk

We can use this table as a best-practice reference to decide which option to apply in an AKS environment. Although an Azure Managed Disk can provide higher throughput than Azure Files, it can NOT have multiple clients read and write simultaneously. If a workload requires multi-client operations, this option would not be ideal.

Dynamic

The attribute that distinguishes a static Azure Managed Disk from a dynamic one is whether the resource is created by the user or by a storage class. For static disks, the disk is created by the user; dynamic disks are created from a storage class. A storage class is a pre-defined storage template within the AKS environment that automatically creates disk resources based on the provided request. If any storage class customization is needed, please refer to this section of the article. For example, if the user chooses to provision a dynamic disk with the default setup, a Standard SSD disk is provisioned in the “MC_” resource group.
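As a sketch, a PVC requesting a disk from the default storage class might look like the manifest below. The name `demo-managed-disk-pvc` is hypothetical, and the default class is typically `managed-csi` on CSI-enabled clusters (it may simply be `default` on older ones):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-managed-disk-pvc   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce             # managed disks allow only a single writer
  storageClassName: managed-csi # may be "default" on older clusters
  resources:
    requests:
      storage: 5Gi
```

Applying this PVC triggers the storage class to create the underlying disk in the “MC_” resource group.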

Once you go through the persistent volume claim (PVC) provisioning step, the resource can be checked.

# check built-in storage class
kubectl get sc
# check the PVC provisioned from storage class
kubectl get pvc  # optionally scope with -n <namespace>

Go through the steps of creating a Pod using the PVC to ensure everything is working as expected.
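A minimal Pod manifest consuming the PVC could look like the sketch below (the Pod and claim names are hypothetical, and any container image would work):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx                        # any image works for the demo
      volumeMounts:
        - mountPath: /mnt/azure           # disk appears here inside the container
          name: disk-volume
  volumes:
    - name: disk-volume
      persistentVolumeClaim:
        claimName: demo-managed-disk-pvc  # hypothetical PVC name
```

Once the Pod is running, `kubectl exec` into it and write to `/mnt/azure` to confirm the disk is mounted.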

Static

Essentially, provisioning a static disk is just like provisioning a dynamic one. The difference is that you specify the disk resource URI instead of a PVC when mounting it in the Pod. This article walks through everything that needs to be done.

Currently, static disks do not support zone-redundant AKS clusters, as a disk must be placed in one specific zone. The error message in the Pod would look similar to the one below.
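To illustrate mounting by disk resource URI, a Pod with an inline `azureDisk` volume might look like this sketch (the disk name, subscription, and resource group are placeholders for your own values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-disk-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /mnt/azure
          name: azure
  volumes:
    - name: azure
      azureDisk:
        kind: Managed
        diskName: myStaticDisk   # hypothetical pre-created disk
        diskURI: /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/myStaticDisk
```

The Pod must be scheduled on a node in the same zone as the disk, which is why zone-redundant clusters run into trouble here.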

Azure Ultra Disk

Dynamic

As of now, Ultra Disks only support AKS clusters deployed in specific regions and zones. For more information, please check this site. Users can create a new cluster or a new node pool that supports Azure Ultra Disks. In this blog, we will create a new node pool in the existing cluster.

# update the aks-preview extension to support Azure Ultra Disks
az extension update --name aks-preview
# create a node pool that supports Azure Ultra Disks in the current cluster
az aks nodepool add -g <resource group name> --cluster-name <cluster name> --name <node pool name> --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
# create an Azure Ultra Disk storage class for dynamic creation
# reference: Enable Ultra Disk support on Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Docs
kubectl apply -f <file name of the YAML>
# check whether the storage class is being created successfully
kubectl get sc
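The storage class YAML applied above could look like the following sketch, based on the referenced Microsoft Docs page (the class name and the IOPS/throughput figures are illustrative values to adjust for your workload):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc            # hypothetical name
provisioner: disk.csi.azure.com
volumeBindingMode: WaitForFirstConsumer  # delay binding until the Pod's zone is known
parameters:
  skuname: UltraSSD_LRS
  kind: managed
  cachingMode: None              # Ultra Disks do not support host caching
  diskIopsReadWrite: "2000"      # illustrative IOPS target
  diskMbpsReadWrite: "320"       # illustrative throughput target
```

`WaitForFirstConsumer` matters here because the Ultra Disk must land in the same zone as the node that will use it.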

Provision a PVC using the Ultra Disk storage class and provision a Pod using the PVC. Every step is the same as provisioning a Pod using dynamic Managed Disks.

Static

The way to provision static Ultra Disks for AKS is the same as provisioning static Managed Disks for Pods, so the demonstration would be skipped.

Azure Files

Azure Files provides lower throughput than Azure Managed Disks, but it allows multiple clients to interact with the share simultaneously.

Dynamic

By now, if you have performed the steps along with the blog, you should be familiar with the concept of dynamic provisioning. Essentially, if the default storage classes do not offer the desired tier, users need to create one with the provided steps. Here, I will be using the default storage class, standard locally redundant storage (LRS).

Then, create a PVC and a Pod using that PVC. If you need a reference on provisioning PVCs, please check here.
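A sketch of the PVC is below; note the `ReadWriteMany` access mode, which is exactly what Azure Files offers over managed disks (the claim name is hypothetical, and `azurefile` is the typical built-in class name):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-files-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany           # multiple Pods can read and write simultaneously
  storageClassName: azurefile # built-in standard LRS class
  resources:
    requests:
      storage: 100Gi
```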

Static

Follow the official documentation to create a storage account. Within the storage account, create a file share. Next, get the storage account name and access key so they can be saved into the AKS environment as a secret.
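The steps above can be sketched with the Azure CLI as follows (the account, share, and resource group names are hypothetical placeholders):

```shell
# create a storage account (name must be globally unique)
az storage account create -n mystorageacct -g myResourceGroup -l eastus --sku Standard_LRS

# create a file share within the storage account
az storage share create -n myshare --account-name mystorageacct

# capture the access key for the secret in the next step
STORAGE_KEY=$(az storage account keys list -g myResourceGroup -n mystorageacct --query "[0].value" -o tsv)
```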

Then, users could choose to mount the file share as inline volume or PV/PVC.

Inline Volume

With an inline volume, we just need to follow this section of the article to have the Pod use the created secret to mount the file share.
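A minimal sketch of a Pod with an inline `azureFile` volume, assuming a secret named `azure-secret` and a share named `myshare` (both hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: file-share-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret  # secret holding account name and key
        shareName: myshare        # hypothetical file share
        readOnly: false
```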

PV and PVC

With PV and PVC, please follow this section of the article. The PVC template needs to be revised: “volumeName” must be specified to ensure the PVC is bound to the correct PV.
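The relevant part of the PVC could look like this sketch (the PV and PVC names are hypothetical; the empty `storageClassName` disables dynamic provisioning so the claim binds only to the named PV):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-files-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # skip dynamic provisioning
  volumeName: static-files-pv   # bind to this specific pre-created PV
  resources:
    requests:
      storage: 100Gi
```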

For some reason, the secret that stores the Azure Storage Account name and access key needs to be saved in the “default” namespace. Otherwise, the Pod mounting the static file share will fail to find the credentials to access the storage account.
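Creating that secret might look like this (the secret and account names are hypothetical, and `$STORAGE_KEY` is assumed to hold the access key fetched earlier):

```shell
kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=mystorageacct \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY \
  --namespace default
```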

One thing to note is that you can modify the mounting permissions when creating the storage class, so whoever leverages the Pod to complete the workload does not get more permissions than needed.
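As a sketch of that idea, a storage class can carry `mountOptions` that lock down ownership and modes on the mounted share (the class name and the uid/gid/mode values are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-restricted     # hypothetical name
provisioner: file.csi.azure.com
mountOptions:
  - uid=1000                     # files owned by a non-root user
  - gid=1000
  - file_mode=0640               # owner read/write, group read only
  - dir_mode=0750
parameters:
  skuName: Standard_LRS
```

Any volume provisioned from this class is mounted with these permissions, regardless of what the Pod asks for.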

Azure HPC Cache

There are a couple of things to note when following the official article on provisioning Azure HPC Cache as a PV/PVC for AKS Pods.

# install the HPC Cache extension
az extension add -n hpc-cache
# modify permissions so the HPC Cache identity can access the created storage account / container
az storage container create --name <container name> --account-name <your storage account name, NOT "jebutlakestorage"> --auth-mode login
# check the resource group containing the AKS-associated virtual network;
# both the Azure Private DNS Zone and the Private Link need to be created within it.
# reference: Integrate Azure HPC Cache with Azure Kubernetes Service - Azure Kubernetes Service | Microsoft Docs
# if you get stuck when creating the storage target within HPC Cache, refer to this site for creating the target from the Azure portal.

The key resources involved are:

  • Storage Account and BLOB Container
  • HPC Cache
  • Resource Group containing the AKS-associated Virtual Network
  • Private DNS Zone
  • Private Link

If you pay attention to all the points mentioned above, a Pod should be able to access the provisioned PV/PVC without issues.

Azure NetApp Files

Dynamic

In order to dynamically provision Azure NetApp Files as PVC in AKS, Astra Trident is needed as the operator, the communication broker if you will. For more information about Astra Trident, please refer to this article.

The first few commands install Astra Trident in the AKS environment, but we need to update the template of its backend to ensure the broker communicates with the right resource within the right Azure subscription. Please note that an Azure AD Service Principal (AAD SP) with sufficient permissions to manage Azure NetApp Files is required. If you need a reference on AAD SP creation, please refer to this site.
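Creating such a service principal might look like the sketch below (the SP name is hypothetical, and the Contributor role scoped to the subscription is an assumption; a narrower custom role covering Azure NetApp Files operations would also work):

```shell
# create a service principal for Astra Trident's ANF backend;
# note the appId, password, and tenant from the output for backend-anf.yaml
az ad sp create-for-rbac --name anf-trident-sp \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>
```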

# modify the Astra Trident backend template to have the correct Azure AD Service Principal credentials and Azure subscription
nano trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml
# install the modified Astra Trident backend template
kubectl apply -f trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml -n trident
# get the secret within trident namespace
kubectl get secrets -n trident
# get the trident backend config within trident namespace
kubectl get tridentbackendconfig -n trident

Once all Astra Trident related components are installed and running, we follow the same process as for dynamically provisioning any other storage resource on Azure: create a storage class for the specific resource tier, then a PVC, then a Pod using the PVC.
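A sketch of such a storage class, assuming the Trident backend of type `azure-netapp-files` configured above (the class name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: anf-standard             # hypothetical name
provisioner: csi.trident.netapp.io
parameters:
  backendType: azure-netapp-files  # route claims to the ANF backend
```

PVCs referencing this class are fulfilled by Trident, which creates the corresponding Azure NetApp Files volume behind the scenes.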

Static

The Azure environment needs to have NetApp provider registered.

az provider register --namespace Microsoft.NetApp --wait

Users also need to identify the resource group containing the AKS-associated virtual network, so the subnet creation Azure CLI command can be performed without issues.
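That subnet creation step might look like the sketch below; Azure NetApp Files requires a subnet delegated to `Microsoft.NetApp/volumes` (the subnet name and address prefix are illustrative placeholders):

```shell
# create a delegated subnet for Azure NetApp Files in the AKS-associated VNet
az network vnet subnet create \
  -g <resource group containing the AKS vnet> \
  --vnet-name <aks vnet name> \
  -n anf-subnet \
  --address-prefixes 10.0.2.0/28 \
  --delegations "Microsoft.NetApp/volumes"
```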

Other than that, provisioning the PV, the PVC (the PV name needs to be stated in “volumeName” in the YAML file, as covered in a previous section of this blog), and the Pod is all very similar to the steps mentioned for Azure Managed Disks, Azure Files, and Azure HPC Cache. In case you cannot find the documentation for provisioning static Azure NetApp Files, the reference link is here.

PVC for Static NetApp Files

That covers the Azure-native storage options for AKS: Managed Disks, Ultra Disk, Azure Files, HPC Cache, and NetApp Files. In part 2, we will look into Linux-based solutions and open source projects supporting other cloud platforms!


Started my career as a consultant, moved to support engineer, service engineer and now a product manager. Trying to be a better PM systematically every day.