Azure Kubernetes Service (AKS) with Kubernetes Event-Driven Autoscaling (KEDA)

Jonathan
4 min read · Nov 25, 2021

Kubernetes Event-Driven Autoscaling (KEDA) is exactly what its name suggests. It is a solution jointly developed by Microsoft and Red Hat that scales Kubernetes Pods based on events, in addition to resource utilization metrics such as CPU and memory.

Several components need to be in place before we can even start testing the functionality.

  • Install KEDA on the AKS cluster. The linked section provides clear guidance on installing it via Helm, but you can also choose to install it via YAML files.
  • An event source to test the event-driven mechanism: Azure Storage Queue, Azure Service Bus, or any other supported scaler. Click on the links to go through the service-creation process. The screenshots below show where to find the required information when setting up KEDA.

To obtain Azure Storage Connection String

To obtain Azure Storage — Queue

To obtain Azure Service Bus Connection String

To obtain Azure Service Bus — Queue
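If you prefer the CLI over the portal screenshots, the same values can be fetched with the Azure CLI. A sketch, assuming a hypothetical resource group `my-rg`, storage account `mystorageacct`, and Service Bus namespace `my-sb-ns`:

```shell
# Storage account connection string (used by the azure-queue scaler)
az storage account show-connection-string \
  --resource-group my-rg --name mystorageacct \
  --query connectionString -o tsv

# Service Bus connection string (RootManageSharedAccessKey is the default rule)
az servicebus namespace authorization-rule keys list \
  --resource-group my-rg --namespace-name my-sb-ns \
  --name RootManageSharedAccessKey \
  --query primaryConnectionString -o tsv
```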

  • The KEDA GitHub repository provides examples written in different languages, but so far I have only made the ones written in Go work.
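Stepping back to the first bullet, the Helm-based install is short. These are the commands from the official KEDA Helm chart; installing into a dedicated `keda` namespace is a common convention, not a requirement:

```shell
# Add the official KEDA chart repository and refresh the local index
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

# Install KEDA into its own namespace
helm install keda kedacore/keda --namespace keda --create-namespace
```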

RabbitMQ consumer and sender: This example uses RabbitMQ for the message sender and consumer. One thing to note is that it needs to be deployed in the default namespace; for some reason, deploying into other namespaces did not work.

The magic happens in deploy-consumer.yaml. The ScaledObject (rabbitmq-consumer) is triggered via the TriggerAuthentication (rabbitmq-consumer-trigger) and scales the Deployment (rabbitmq-consumer). Please refer to this site for more details. At the end of the day, you need a Secret that stores the required credentials (such as the Azure Storage connection string and queue name), a TriggerAuthentication that references the Secret, and a ScaledJob or ScaledObject along with a Deployment/StatefulSet/custom resource.

If you are interested in setting up KEDA in your own AKS clusters, examples for different scalers are already included in the official documentation.

Secret

TriggerAuthentication

ScaledObject

Deployment
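The four pieces above fit together roughly as follows. This is a minimal sketch modeled on the KEDA RabbitMQ sample; the resource names, queue name, and the placeholder secret value are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-consumer-secret
data:
  RabbitMqHost: <base64-encoded AMQP connection string>  # e.g. amqp://user:PASSWORD@rabbitmq.default:5672
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-consumer-trigger
spec:
  secretTargetRef:
    - parameter: host               # trigger parameter the scaler expects
      name: rabbitmq-consumer-secret
      key: RabbitMqHost
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    name: rabbitmq-consumer         # the Deployment to scale
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello
        queueLength: "5"            # target messages per replica
      authenticationRef:
        name: rabbitmq-consumer-trigger
```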

Make sure RabbitMQ consumer components work.

# Install RabbitMQ with the Helm chart
helm install rabbitmq --set auth.username=user --set auth.password=PASSWORD bitnami/rabbitmq
# Deploy the essential components (Secret, TriggerAuthentication, etc.) of the RabbitMQ consumer
kubectl apply -f deploy/deploy-consumer.yaml

Immediately after the RabbitMQ publisher job is deployed to the environment,

kubectl apply -f deploy/deploy-publisher-job.yaml

the RabbitMQ consumer Pods spawn and start handling the workload.
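To watch this happen yourself, something like the following works; the label selector is an assumption based on the sample's Deployment name:

```shell
# Watch consumer Pods being created as the queue fills up
kubectl get pods -l app=rabbitmq-consumer -w

# KEDA drives a regular HorizontalPodAutoscaler under the hood; you can inspect it too
kubectl get hpa
```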

KEDA Azure Storage Queue Trigger: This example uses an Azure Storage Queue for sending and consuming messages. Besides the scaler difference, this example also uses a ScaledJob instead of scaling a Deployment with a ScaledObject.

Secret

TriggerAuthentication

ScaledJob
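With a ScaledJob, KEDA spawns Kubernetes Jobs instead of scaling replicas of a long-running Deployment. A minimal sketch of the shape, with illustrative names and image:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: azure-queue-consumer
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: consumer
            image: <your consumer image>   # illustrative
        restartPolicy: Never
  maxReplicaCount: 10                      # cap on concurrent Jobs
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue                 # plain-text queue name
      authenticationRef:
        name: azure-queue-auth             # TriggerAuthentication pointing at the Secret
```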

Be aware that both the Azure Storage connection string and the queue name need to be updated in “deploy-consumer-job.yaml” (the top two fields need base64-encoded values; the bottom one can be plain text).

# to generate a base64-encoded value ("-w 0" disables line wrapping, so no newline appears after the value)
echo -n "value" | base64 -w 0
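For example, the encode/decode round trip below uses a placeholder value in place of the real connection string:

```shell
# Encode without a trailing newline ("-w 0" is GNU base64; assumed Linux)
ENCODED=$(printf '%s' "my-connection-string" | base64 -w 0)
echo "$ENCODED"

# Decode to verify the round trip
printf '%s' "$ENCODED" | base64 -d
```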

When running Go commands (“go run xxx”) in a Linux environment, you may find that the environment does not recognize the Go toolchain. In that case, manually add Go to the environment path and refresh your user profile.

# make sure the path to Go executable is added in the default system environment path.
export PATH=$PATH:/usr/local/go/bin
# make sure to refresh user profile
source $HOME/.profile
# check whether this command works
go version

After running the command (on the left side of the screen capture below)

go run cmd/send/send.go <number of messages you would like to send>

users would need to wait until the ScaledJob is triggered (on the right side of the screen capture below).

That pretty much gives you a taste of what KEDA is and how it works. It is really useful for event-driven solutions, since Kubernetes no longer needs to rely solely on resource consumption as the scaling indicator! Hope this is helpful! Happy learning!


Jonathan

Started my career as a consultant, then became a support engineer, a service engineer, and now a product manager. Trying to be a better PM systematically every day.