Azure Kubernetes Service (AKS) with Different Ingress Options Part 1
HTTP Application Routing Add-On
The HTTP application routing add-on gives administrators an easy way to expose HTTP applications to the external environment. Behind the scenes, the add-on creates deployments and services within the cluster under the “kube-system” namespace.
The LoadBalancer service associated with the add-on is also tied to an Azure DNS zone with the default fully qualified domain name (FQDN) Azure provides, “<random-generated string>.<Region>.aksapp.io”, which lives in the node resource group “MC_<RG name>_<AKS cluster name>_<Region>”.

Check whether HTTP Application Routing Add-On is Enabled
- az aks show -g <RG name> -n <AKS cluster name> | grep addonProfiles -A20

If “enabled” is not “true” under “httpApplicationRouting”, the add-on can easily be enabled with
- az aks enable-addons --resource-group <RG name> --name <AKS cluster name> --addons http_application_routing
Check whether Corresponding Components are Up and Running
- kubectl get deploy,pod,svc -n kube-system | grep addon-http-application*

Deploy an Application that Uses HTTP Application Routing Add-On
We will use the sample YAML file provided in the official documentation to see how the whole setup works. One thing to note is that the host value should be replaced with a name of our choosing combined with the default FQDN Azure provides for the add-on’s DNS zone, as sketched below.
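For reference, here is a minimal sketch of what the Ingress portion of that file could look like (the name “party-app” and the backend service/port are placeholders; the host combines your chosen name with the add-on’s DNS zone):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: party-app
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: party-app.<random-generated string>.<Region>.aksapp.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: party-app
            port:
              number: 80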

If you are following the documentation, the deployment, service and ingress can all be created with a single command.
- kubectl create -f <whatever file name you choose>

Then, describe the newly created Ingress to confirm the host and address before visiting the application from an external environment.
- kubectl describe ing <custom name you provided for ingress in the YAML>

An “A” record and a “TXT” record are added to the associated DNS zone.
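The records can also be confirmed from the CLI (assuming the default node resource group naming used above):
- az network dns zone list -g MC_<RG name>_<AKS cluster name>_<Region> -o table
- az network dns record-set list -g MC_<RG name>_<AKS cluster name>_<Region> -z <random-generated string>.<Region>.aksapp.io -o table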

Visit the site with any browser.

NGINX Ingress Controller
External Access
The NGINX Ingress Controller can be installed either through Helm, as described in the Azure official documentation, or by following the NGINX official documentation. I will be using the one-liner from the NGINX official documentation, which installs everything needed for the NGINX Ingress Controller to run on AKS. A namespace called “ingress-nginx” is created, and everything related to the NGINX Ingress Controller lives within that namespace.
- kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml

Follow the steps under “Run demo applications” and the deployments, services and Ingress should be created. For ease of management, I put everything under the “ingress-nginx” namespace.
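For context, the routing piece of that demo boils down to an Ingress roughly like the sketch below. The service names aks-helloworld-one/aks-helloworld-two are assumed to match the demo applications from the documentation (adjust to whatever yours are called), and the rewrite-target annotation simply sends each matched request to the application root, which is enough for these demo apps:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello-world-one
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80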

Test access to both applications with curl.
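Assuming the controller’s public IP has already been noted from the ingress-nginx-controller service (kubectl get svc -n ingress-nginx), the checks could look like:
- curl http://<NGINX Ingress Public IP>/hello-world-one
- curl http://<NGINX Ingress Public IP>/hello-world-two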

Use Self-Signed Certificate for Applications
The steps for creating HTTPS applications with the NGINX Ingress are similar to the steps for creating HTTP applications. The only difference is that we need to generate self-signed certificates for our applications so that they can communicate over HTTPS.
Follow the steps under “Generate TLS certificates” in the Microsoft official documentation and you will end up with two files, one with a “.crt” extension and the other with a “.key” extension.
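That step essentially comes down to a single openssl call; a sketch, where the subject and file names are illustrative and the CN should match the host you plan to use in the Ingress:
- openssl req -x509 -nodes -days 365 -newkey rsa:2048 -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=demo.azure.com/O=aks-ingress-tls"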

Create a TLS secret with the newly created files.
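A sketch, assuming the file names above and the “ingress-nginx” namespace (the secret name aks-ingress-tls is a placeholder):
- kubectl create secret tls aks-ingress-tls --namespace ingress-nginx --cert aks-ingress-tls.crt --key aks-ingress-tls.key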

Configure the Ingress YAML file to leverage the newly generated TLS secret.
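A minimal sketch of such an Ingress, reusing the host and secret name from the examples above (the backend service names again follow the earlier demo applications):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-tls
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - demo.azure.com
    secretName: aks-ingress-tls
  rules:
  - host: demo.azure.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80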

Test access to both applications through the NGINX Ingress.
- curl https://demo.azure.com --resolve demo.azure.com:443:<NGINX Ingress Public IP> -k
- curl https://demo.azure.com/hello-world-two --resolve demo.azure.com:443:<NGINX Ingress Public IP> -k

Check whether both applications are using the self-signed certificate for HTTPS communication.
- curl https://demo.azure.com --resolve demo.azure.com:443:<NGINX Ingress Public IP> -kv

Use Let’s Encrypt Certificate
I thought this would just be a matter of following another step-by-step document, but this configuration actually took me two full days to really understand how everything works.
Everything behaves as described in the documentation until the step of creating the Ingress controller Pods.
- kubectl get secret,deploy,pod,svc,ing -n <Ingress namespace>

You might encounter a situation where
- the Pods’ status shows “CrashLoopBackOff”
- describing the Ingress controller Pods returns “IngressClass with name nginx is not valid for ingress-nginx (invalid Spec.Controller)…”
This happens because the Ingress class is not set up correctly. If we edit the IngressClass so that its controller is “k8s.io/ingress-nginx”, the issue is mitigated.
- kubectl edit ingressclass nginx
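After the edit, the IngressClass should look roughly like this, with spec.controller matching the value the controller Deployment advertises:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx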

Another thing to do before moving forward is to delete all the unnecessary validating webhook configurations. For this scenario, none of the objects related to NGINX Ingress controller admission are necessary. Otherwise, you are highly likely to hit an error message like
“Error from server (InternalError): error when creating “STDIN”: Internal error occurred: failed calling webhook “validate.nginx.ingress.kubernetes.io”: Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: service “ingress-nginx-controller-admission” not found”.
- kubectl get validatingwebhookconfigurations
- kubectl delete validatingwebhookconfigurations <object name similar to nginx-ingress-admission, nginx-ingress-controller-admission …>

Once those two actions are done, continue with creating cert-manager, the CA cluster issuer, the demo applications and the Ingress from the official documentation. Finally, once all the resources are created in the target namespace, you should see a result similar to the one below.
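For reference, the CA cluster issuer part of that flow is typically a ClusterIssuer along these lines (a sketch; the issuer name, email and private key secret name are placeholders):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your email address>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx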

Finally, we can test access to all the application endpoints. In my testing environment, the Ingress public IP is associated with “jonwaksnginxingress.westus2.cloudapp.azure.com”.

- curl https://jonwaksnginxingress.westus2.cloudapp.azure.com/hello-world-one -k
- curl https://jonwaksnginxingress.westus2.cloudapp.azure.com/hello-world-two -k

Internal Access
If, like me, you would like to run multiple NGINX Ingress controllers within the environment, then when you follow this documentation to create an internal NGINX Ingress in the “ingress-internal” namespace, you will encounter error messages such as
“Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole “nginx-ingress-ingress-nginx” in namespace “” exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key “meta.helm.sh/release-namespace” must equal “ingress-internal”: current value is “ingress-nginx””.
Since I am no Helm warrior, I prefer to head over to the NGINX Ingress controller official installation guide for Azure deployments. If you open the raw YAML file URL from the one-liner, you can see all the components needed to run a working NGINX Ingress. However, that manifest is for external-facing traffic, so we need to make some changes accordingly.

If you follow the NGINX Ingress controller official installation guide, the default namespace being created is “ingress-nginx”. If that name is already in use for the external NGINX Ingress, it should be switched to another name. One quick way to update the namespace for all components is Find and Replace in Notepad++ (a command-line equivalent is sketched after the list below).
- ctrl + F → Switch to “Replace” tab
- Find what: “namespace: ingress-nginx”
- Replace with: “whatever namespace name”
- Save the modified file and use it to create the new NGINX Ingress for internal service communication
- kubectl create -f “modified-file.yaml”
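If you prefer the command line, the same replacement can be done with a one-liner along these lines (assuming the downloaded manifest was saved as modified-file.yaml and the new namespace is ingress-internal):
- sed -i 's/namespace: ingress-nginx/namespace: ingress-internal/g' modified-file.yaml
Note that the manifest also contains a Namespace object whose own name field (“name: ingress-nginx”) may need the same update, or you can create the target namespace beforehand.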

Luckily, the Microsoft official documentation gives us a hint: an annotation and a load balancer IP need to be added to the YAML file, so we just need to locate those two places after creating the NGINX Ingress controller.
- kubectl edit svc ingress-nginx-controller -n <whatever namespace>
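After the edit, the relevant portions of the controller Service should look roughly like this (the 10.x address is a placeholder and must be a free address in your VNet subnet):

metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.42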


Then, check the NGINX Ingress controller service with the K8s CLI.

Now we can create the demo applications and run a test. Since this is an internal NGINX Ingress, we need a Pod in the same network environment to test the functionality. By following the steps here, we can execute the test commands from a shell inside a Pod, as sketched below.
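One quick way to get such a shell is a throwaway Pod with curl available; the image and the internal IP below are placeholders:
- kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- sh
Then, from inside the Pod:
- curl http://<internal Load Balancer IP>/hello-world-one
- curl http://<internal Load Balancer IP>/hello-world-two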

Since there is a whole lot more around using Azure Application Gateway (AzAppGw) as an AKS Ingress controller, we will do all the step-by-step installation and testing in Part 2!