Posts

Showing posts from May, 2022

Running Multiple NGINX Ingress Controllers in AKS

In some of the previous articles of this blog, we went through the process of installing NGINX as the Ingress Controller in AKS clusters, either for applications that should be available directly from the public internet or for applications that should only be accessed from networks considered internal. In most cases, however, given that an AKS cluster is an environment designed to host multiple applications and provide economies of scale, additional ingress controllers may be required. In this post, we're going to go through the process of deploying an additional NGINX ingress controller that is going to be used for internal applications. The diagram below depicts the desired outcome: the Angular application is published via the public NGINX through the Azure Load Balancer that has been assigned a public IP, while the .NET app is published by a different set of NGINX pods that are deployed in a different namespace and whose ingress controller service is connected to…
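A minimal sketch of how such a second, internal controller could be installed with the upstream ingress-nginx Helm chart; the release name, namespace, and IngressClass name below are illustrative assumptions, not values taken from the article:

  # Add the chart repo once (skip if already present).
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

  # Second ingress-nginx release in its own namespace with its own
  # IngressClass, so it does not clash with the existing public controller.
  # Release, namespace, and class names are placeholders.
  helm install nginx-internal ingress-nginx/ingress-nginx \
    --namespace ingress-internal --create-namespace \
    --set controller.ingressClassResource.name=nginx-internal \
    --set controller.ingressClassResource.controllerValue="k8s.io/nginx-internal" \
    --set-string controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true

Internal applications then select this controller by setting ingressClassName: nginx-internal in their Ingress resources, while public ones keep using the default class.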

Publishing AKS services to private networks using NGINX

In one of the previous posts, we used NGINX as the ingress controller in Azure Kubernetes Service to publish applications and services running in the cluster (the article is available here). In that deployment, NGINX was using a public IP for the ingress service, something that may not be suitable for services that should be kept private or protected by another solution. In today's post, we're going to deploy NGINX in a way that the ingress service uses a private IP from the same vNet that the cluster is built on top of. Following the usual steps of logging in and selecting the subscription to use in the Azure CLI, you just have to execute the deployment script that deploys the main.bicep file and all of its sub-resources to create the AKS platform on Azure. The next step is to get the credentials for the cluster using the az aks get-credentials command, as shown in the getAKSCredentials.sh script, in order to connect using the kubectl tool. If you get the services…
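As a hedged sketch of those steps, assuming placeholder resource group and cluster names and the upstream ingress-nginx chart (the article's own scripts may differ):

  # Get credentials for the cluster (names are placeholders).
  az aks get-credentials --resource-group rg-aks-demo --name aks-demo

  # Install ingress-nginx so its Service is backed by an internal Azure
  # Load Balancer and gets a private IP from the cluster's vNet.
  helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set-string controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true

  # The EXTERNAL-IP column of the controller Service should now show a
  # private address from the vNet rather than a public IP.
  kubectl get svc --namespace ingress-nginx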

Upload your image on Azure Container Registry

In my previous article, we went through the process of uploading our own image to Docker Hub. Even though it's perfectly fine to use Docker Hub, some organizations prefer using their own registries, either due to additional functionality that they might offer or due to policies and regulations. Since Azure offers a container registry resource that fits most use cases, I thought we could try to upload some images and at least use it for test and dev purposes! First things first, the tools we're going to need. To upload an image to an ACR we need: 1. an Azure Container Registry, 2. Docker Desktop, and 3. the Azure CLI. Starting from the top of the list, we need the ACR resource to which we're going to upload the images. Clone my GitHub bicep repository and switch to the ContainerRegistry-ImageUpload-001/110-Bicep/110-AzureContainerRegistry/ folder. Update the parameters in the deploy script and submit the deployment. The output should be similar to the below. If everything goes…
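For reference, the push itself boils down to a handful of commands; the registry and image names below are placeholders, not the ones used in the repository:

  # Authenticate against Azure and the registry (names are placeholders).
  az login
  az acr login --name myregistry

  # Tag the local image with the registry's login server and push it.
  docker tag myapp:latest myregistry.azurecr.io/myapp:latest
  docker push myregistry.azurecr.io/myapp:latest

  # Verify that the repository now shows up in the registry.
  az acr repository list --name myregistry --output table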

High Performance K8s Storage with Rook & Ceph

Modern applications are usually developed with cloud-native principles in mind; there are, however, some that have particular requirements in terms of storage. When developing for containers, the need for ReadWriteMany storage may arise, which can turn into a problem when cloud services fail to match the requirements, especially the ones related to performance. One of the solutions to this problem is the Rook and Ceph combination. Ceph is an open-source software-defined storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Rook, on the other hand, performs all the administrative tasks such as deployment, configuration, provisioning, scaling, and more. A multi-zone AKS cluster is the perfect home for Rook and Ceph: the cluster is spread across three data centers and so is the data handled by Ceph. This improves the SLA of the services running on the cluster and at the same time…
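As a rough sketch of the deployment, based on the upstream Rook quickstart; the release branch is an assumption, so check the Rook documentation for the current version and manifest paths:

  # Clone the Rook repo at a pinned release (version is an assumption).
  git clone --single-branch --branch v1.9.3 https://github.com/rook/rook.git
  cd rook/deploy/examples

  # Deploy the CRDs, common resources, and the Rook operator.
  kubectl create -f crds.yaml -f common.yaml -f operator.yaml

  # Create the Ceph cluster; with a multi-zone AKS node pool the OSDs
  # end up spread across the availability zones.
  kubectl create -f cluster.yaml

  # Watch the operator bring up the mon, mgr, and OSD pods.
  kubectl -n rook-ceph get pods -w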