Azure Kubernetes Service (AKS) with Different Storage Options — Part 2
In part 1, we introduced all the Azure-native storage options. So, in part 2, we will look into options beyond the ones Azure provides natively.
Network File System (NFS) Server
If you are not familiar with NFS, please refer to the Wikipedia article. NFS now has four versions. Basically, NFS is designed to let users and computers access files over a network the same way they access files on local storage.
This article covers pretty much all the details for setting up an NFS server in the same virtual network as AKS. One thing worth calling out is the NFS export path: make sure the path you reference from Kubernetes is the one you added to “/etc/exports” inside the Linux server. Another thing to note is that there is no way to dynamically provision space from an NFS server for AKS; the volume has to be defined statically.
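For illustration, a minimal “/etc/exports” entry could look like the sketch below; the export path “/export/data” and the AKS subnet range “10.240.0.0/16” are assumptions here, so substitute your own values.

# /etc/exports on the NFS server — export the share to the AKS subnet (assumed path and CIDR)
/export/data 10.240.0.0/16(rw,sync,no_subtree_check)

# re-export after editing the file
sudo exportfs -a
# verify the export is active
sudo showmount -e localhost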
The server's private IP can be checked from the Azure portal.
The rest of the operations are very similar to Azure Files — Static: ensure the PV connects to the server and the export path without issues; ensure the PVC can be bound to the PV successfully; and ensure the Pod using the PVC is up and running.
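A minimal sketch of that PV/PVC/Pod chain, assuming the server's private IP is 10.240.0.4 and the export path is /export/data (both hypothetical — replace with your own values):

# static NFS volume — a PV pointing at the server, a PVC bound to it, and a Pod using the PVC
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.240.0.4        # NFS server private IP (assumption)
    path: /export/data        # must match the path in /etc/exports (assumption)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # empty string so the PVC binds to the PV above, not a default class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /mnt/nfs
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-pvc
EOF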
Container Storage Interface (CSI) Drivers
CSI is the standard interface for exposing file systems to containerized workloads in container orchestrators like Kubernetes. It acts as the interface that lets developers come up with new plug-ins without worrying about touching core Kubernetes code. To understand more about how you could contribute to the community, please check their GitHub site. This section covers the production drivers that each vendor develops and maintains.
Azure Disk
AKS clusters running version 1.21 or later use CSI drivers by default; for those running versions below 1.21, please refer to this site for enabling them on a new AKS cluster. If you are not creating a new AKS cluster but instead installing the CSI drivers on an existing one, please follow the steps here.
A screen capture of all the installed resources up and running.
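If you prefer to confirm from the command line instead, the driver pods and the registered CSIDriver object can be listed directly (the pod names below assume a standard install):

# the CSI driver runs as controller and node pods in kube-system
kubectl get pods -n kube-system | grep csi-azuredisk
# the driver itself should also show up as a registered CSIDriver object
kubectl get csidrivers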
Ensure the AKS user-managed identity has been added to the “MC_” resource group as a Contributor. The name of the user-managed identity should follow the naming convention “<AKS cluster name>-agentpool”. You could use the following AZ CLI command to figure out the identity's client ID/object ID:
# get the AKS user-managed identity client ID and object ID
az aks show -g <resource group name> -n <AKS cluster name> | grep kubeletidentity -A 5 -B 5
Alternatively, you could just head into the “MC_” resource group and look for the user-managed identity. The information is mentioned here.
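If you prefer the CLI for the role assignment step described above as well, a sketch along these lines should work (the resource group and cluster names are placeholders):

# grab the kubelet identity's object ID
OBJECT_ID=$(az aks show -g <resource group name> -n <AKS cluster name> \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# grant it Contributor on the node ("MC_") resource group
az role assignment create --assignee $OBJECT_ID --role Contributor \
  --scope $(az group show -n <MC_ resource group name> --query id -o tsv)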
Another important step is to make sure a CSI storage class has been created within the AKS environment. The command is provided here, at the very end of the article. A storage class named “managed-csi” will be added to your AKS environment once it is applied. For creating other customized storage classes, please refer to this section.
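For reference, a customized Azure Disk CSI storage class is only a few lines of YAML; the class name, SKU, and expansion settings below are example choices, not requirements:

# a customized storage class backed by the Azure Disk CSI driver
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-premium   # example name for a customized class
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS        # example SKU; pick what fits your workload
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF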
After that, everything follows the normal process of dynamically provisioning Azure Disks: create a PVC, then a Pod that uses that PVC.
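As a quick sketch of that process (resource names are placeholders), the PVC references the “managed-csi” class and the Pod references the PVC:

# dynamic provisioning — the PVC triggers disk creation, the Pod mounts it
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce           # Azure Disks attach to a single node at a time
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: azure-disk-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /mnt/azuredisk
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF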
Azure File
Dynamic
First things first, we need to install the Azure File CSI drivers on the AKS cluster. If you are creating a new cluster with a Kubernetes version lower than 1.21 and installing the CSI drivers at the same time, please refer to this section; if you are installing the CSI drivers on an existing cluster, please refer to this site.
Confirm the installed resources are up and running.
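As with the Azure Disk drivers, a quick command-line check confirms the same thing (the pod names assume a standard install):

# the Azure File CSI controller and node pods live in kube-system
kubectl get pods -n kube-system | grep csi-azurefile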
Just like using regular Azure Files as AKS storage, if the PVC needs to be provisioned dynamically, a storage class needs to be in place. Details are provided here and here.
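For reference, a minimal Azure File CSI storage class could look like the sketch below; the class name and SKU are example choices, not requirements:

# a customized storage class backed by the Azure File CSI driver
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-custom  # example name
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS       # example SKU
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF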
A PVC and a Pod using that PVC need to be provisioned. For details, please follow this section.
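As a sketch, the main difference from the Azure Disk PVC earlier is that file shares support ReadWriteMany, so multiple Pods can mount the same share; the class name below matches the example storage class above, and the Pod definition follows the same pattern as before:

# dynamic Azure Files provisioning — the PVC triggers file share creation
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany           # file shares can be mounted by multiple Pods
  storageClassName: azurefile-csi-custom
  resources:
    requests:
      storage: 100Gi
EOF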
Private & Static
Users would need to provision a storage class (SC), a PVC, and a Pod using that PVC in order to use the CSI drivers to dynamically mount private file shares in the AKS environment. Details are included in this section.
NFS
For the most part, users could follow the official documentation to leverage the CSI drivers to provision Azure Files with the NFS protocol. However, the user-managed identity (following the naming convention “<AKS cluster name>-agentpool” and located in the “MC_” resource group) would need sufficient permissions to alter the virtual network associated with AKS. In my testing environment, the virtual network is located in another resource group, so the access control needs to be revised accordingly.
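The protocol is selected in the storage class; a minimal sketch, assuming the premium tier (which NFS file shares require) and an example class name:

# a storage class that provisions NFS file shares via the Azure File CSI driver
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs     # example name
provisioner: file.csi.azure.com
parameters:
  protocol: nfs               # provision NFS file shares instead of SMB
  skuName: Premium_LRS        # NFS file shares require the premium tier
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF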
Confirm all created resources are up and running without issues.
That would be the general coverage of all supported storage options for AKS as of the time of writing this blog! One thing is for sure: CSI will become more and more mainstream, as it is friendly for developing plug-ins, and that allows a lot of enterprise users to customize storage to their own fit! Happy learning!