In a Kubernetes cluster, the resources we deploy are assigned IPs from the cluster's internal network, which makes them unreachable from other networks. The most common way to expose an app to the world outside the cluster is to create a service. The service load balances traffic across pods and also bridges the gap between the two worlds (the cluster and the outside network) using a concept much like NAT.
This, however, does not give us control over how the app is published, from which pods, and so on. This is where ingress controllers come into play. An ingress controller is a way to publish apps with full control over how each and every component is accessed. Compared to traditional application deployment, we could say that the role and functionality of an ingress controller are much like those of application delivery controllers such as Citrix ADC and F5 BIG-IP.
The diagram below shows the basic functionality of an ingress controller:
In this example, we have three namespaces configured in an AKS cluster, one for the blue app, one for the orange app, and one for NGINX.
As traffic arrives at the virtual IP assigned to the Azure Load Balancer that's part of the AKS cluster, it is passed to the NGINX ingress controller pods and then directed to the appropriate application service based on the configured ingress resources. We are still using K8s services, but their type is ClusterIP, which does not expose them to other networks; the NGINX service acts as the single point of access.
The most common scenarios for using an ingress controller are separating traffic based on the hostname and on the path of the request. Let's assume, for example, that the above namespaces contain two separate applications served at app1.domain.com and app2.domain.com. Since we're using a single IP for all of our traffic, we have to distinguish between the two based on the requested hostname.
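As a minimal sketch (the resource and service names here are illustrative, not taken from my repo), a host-based ingress rule for the first app could look like this; the second app would get an equivalent ingress with app2.domain.com as the host:

```bash
# Hypothetical host-based ingress for the blue app: NGINX will route any
# request whose Host header is app1.domain.com to the blue app's service.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue-app-ingress
  namespace: blue
spec:
  ingressClassName: nginx
  rules:
  - host: app1.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue-app-svc
            port:
              number: 80
EOF
```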
Now that we've covered the basics, it's time to see it in action! I've published all the required files on my GitHub repository, from the Azure templates to the K8s deployment files, so that you can deploy your own test cluster and test the functionality of NGINX.
First, we need to deploy an AKS cluster on Azure. For this, I'm going to be using the Azure CLI to submit a Bicep deployment that will create all the necessary resources. The template files are available in the 101-Bicep-Templates folder. Executing the deploy-manual.sh script in Azure CLI will create a test environment for you to use.
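If you'd rather run the deployment by hand instead of through the script, the Azure CLI steps look roughly like this (the resource group name, location, and template file name are placeholders; the script in the repo does the equivalent):

```bash
# Create a resource group to hold the test environment.
az group create --name rg-aks-nginx --location westeurope

# Submit the Bicep template as a resource group deployment.
az deployment group create \
  --resource-group rg-aks-nginx \
  --template-file main.bicep
```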
When the deployment is completed, get the credentials for the AKS cluster using the az aks get-credentials command and the name and resource group of your cluster. The repository contains a script that does exactly that.
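For reference, the command looks like this (the cluster and resource group names are placeholders; use your own):

```bash
# Merge the cluster's credentials into your local kubeconfig.
az aks get-credentials --resource-group rg-aks-nginx --name aks-nginx-test

# Verify connectivity to the cluster.
kubectl get nodes
```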
The 201-K8s-Deployments folder of the repository contains all the K8s-related files. Starting with the 101-K8s-NGINX folder, which contains all the files for NGINX, we'll create the namespace, execute the Helm chart, and then wait for the pods to be ready:
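If you want to run the steps yourself, the minimal sequence is roughly the following (kept to the bare chart defaults here; the script in my repo sets additional options such as node selectors and SSL passthrough, covered in the notes at the end):

```bash
# Create a dedicated namespace for the ingress controller.
kubectl create namespace ingress-nginx

# Add the official ingress-nginx chart repository and install the chart.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx

# Wait until the controller pods report Ready.
kubectl wait --namespace ingress-nginx \
  --for=condition=Ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```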
The deployment will create a service of type LoadBalancer for NGINX (the green arrow in the diagram above), as shown below:
This means that the IP address 51.124.1.76 is assigned to the Azure Load Balancer that is part of the AKS managed cluster. Reviewing the frontend IP configuration of the load balancer, we see that the IP has indeed been assigned:
This is the IP that all of the application traffic should be directed to, regardless of the configuration inside the cluster.
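If you need to retrieve that IP from the cluster itself, a query along these lines should do it (the service name ingress-nginx-controller assumes the default release name from the chart; adjust it if yours differs):

```bash
# The EXTERNAL-IP column shows the address assigned to the Azure Load Balancer.
kubectl get service --namespace ingress-nginx

# Or extract just the IP of the LoadBalancer service.
kubectl get service ingress-nginx-controller --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```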
Moving on to the application-related deployments, apart from the namespace and the application deployment, we also need a service and an ingress:
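As a sketch of what those two resources look like (the namespace, names, and pod selector here are illustrative rather than the exact ones from the repo; the app.domain.local host is the one used throughout this walkthrough):

```bash
kubectl apply -f - <<'EOF'
# ClusterIP service sitting in front of the application pods.
apiVersion: v1
kind: Service
metadata:
  name: app1-svc
  namespace: app1
spec:
  type: ClusterIP
  selector:
    app: app1
  ports:
  - port: 80
    targetPort: 80
---
# Ingress rule that routes app.domain.local to the service above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: app1
spec:
  ingressClassName: nginx
  rules:
  - host: app.domain.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc
            port:
              number: 80
EOF
```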
Taking a closer look at the ingress that has been created, we can see that its public IP is the same as the IP address of the NGINX service. This is expected, since all the traffic goes through the NGINX service:
Now we need to add an entry in our system's hosts file that maps the host app.domain.local to the IP 51.124.1.76.
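On Linux or macOS, for example, you could append the entry like this (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):

```bash
# Point the test hostname at the ingress controller's public IP.
echo "51.124.1.76 app.domain.local" | sudo tee -a /etc/hosts
```

With the entry in place, opening the site gets us the default page of the application: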
I've configured the main page of the app to show the hostname of the container so that when testing configurations, I can see the pod I've reached. If you take a closer look at the name shown on the test page, you'll find that it matches the name of our application pod from a couple of screenshots above!
If you deploy the same application but from the 202-K8s-App-001 folder, a copy of the application will be deployed in a separate namespace. Part of that deployment is also a new ingress - the same as the previous one but for a different host - app2.domain.local instead of app.domain.local:
Both ingresses have the same address, which is expected. Again, we add an entry in our hosts file, this time for the app2.domain.local host.
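The second entry points at the same IP as the first one:

```bash
# Add the second hostname, also pointing at the ingress controller's IP.
echo "51.124.1.76 app2.domain.local" | sudo tee -a /etc/hosts

# Both ingresses should report the same ADDRESS.
kubectl get ingress --all-namespaces
```

Browsing to app2.domain.local, we reach a similar page, but this time the hostname shown is different: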
You have now managed to direct traffic to different K8s services based on the hostname of the request!
Microsoft has published a guide on how to install NGINX on AKS that is available here.
Important Note
If you'd like to manage HTTPS traffic based on the host header without offloading TLS, you need to deploy NGINX with SSL passthrough enabled. This requires an additional parameter on the helm install command, just as I have it in my repo.
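If you're not using the script from the repo, the option is passed through the chart's controller.extraArgs mechanism, along these lines (a sketch; check the ingress-nginx chart documentation for your chart version):

```bash
# Enable SSL passthrough: TLS terminates at the application pods, and NGINX
# routes on the SNI hostname instead of offloading the certificate itself.
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.extraArgs.enable-ssl-passthrough=true
```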
Important Note 2
The helm command that deploys NGINX in my repo is configured to use selectors in order to deploy the pods to machines in a specific node pool. If you're deploying to your own AKS cluster, you can either create a node pool as defined in the Bicep file or remove the node selector lines from the helm command.
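For reference, the node selector is set through chart values along these lines (the agentpool label is the one AKS applies to its nodes; the pool name here is an example, so match it to your own node pool):

```bash
# Pin the controller pods to a specific node pool via a label selector.
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.nodeSelector.agentpool=nginxpool
```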