Showing posts from 2021

Monitoring Hosts and Domains for RBL Listing Using Azure - Part 2: Deployment

In the previous post of the series (here), we went through the design of a solution to help monitor host and domain listings in RBLs. In this article, we'll go through the process of deploying and configuring the required resources on Azure. To deploy the solution to your Azure subscription, you have to perform two tasks: deploy the Azure resources (that is, the Storage Account and a Function App), and deploy the Function App application code. To deploy the Azure resources, you have to submit an ARM deployment using the ARM template file saved in the repository (here). There are two ways to create a deployment using this file. Deploy to Azure button: the main repository page contains a button that opens the Azure portal and prompts for the deployment parameters. Leaving the default values will result in a deployment in the same location as the resource group, with randomized resource names. ARM file deployment: the other way to deploy an ARM file is to use the Azure PowerShell…

Monitoring Hosts and Domains for RBL Listing Using Azure - Part 1: Design

The thing that prompted the publishing of this series of posts is a recent case of a customer with a large mail platform. The Mail Transfer Agents handling the internet mail flow for such a platform are usually susceptible to being flagged as malicious on RBLs, causing issues with mail flow. The solution described in these posts helps monitor the status of hosts on the RBLs and trigger alerts in service management systems, using modern cloud application development techniques. But wait, what is an RBL? A Real-time Block List is a service that keeps track of the domains and/or the IP addresses of hosts reported to be sending spam or malicious messages. RBLs are actually DNS zones where listed IP addresses or domain names are represented by A records. Let's take a list as an example. To check whether a host with a given IP is listed, we have to query the DNS for the corresponding host record. If there is no such host known (the response is NXDOMAIN), the host is not listed…
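The lookup the excerpt describes can be sketched in shell; the IP address and RBL zone below are placeholders, not a real host or block list:

```shell
# build the RBL query name: the IP's octets reversed, then the RBL zone.
# both values are placeholders for illustration only.
ip="203.0.113.7"
zone="rbl.example.org"

query=$(echo "$ip" | awk -F. -v z="$zone" '{ printf "%s.%s.%s.%s.%s\n", $4, $3, $2, $1, z }')
echo "$query"   # 7.113.0.203.rbl.example.org

# resolving the constructed name answers the question:
#   dig +short "$query"   # an empty answer (NXDOMAIN) means the host is not listed
```

A name that does resolve (typically to an address in 127.0.0.0/8) means the host is on the list, with the exact return code carrying list-specific meaning.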

Deploy Bicep/ARM using Containers

Developing using containers offers the ability to use different environments and software without having to install them on your development workstation, while at the same time conserving a lot of system resources by avoiding the use of virtual machines. Visual Studio Code has been receiving a lot of functionality towards that end, with a great number of extensions, part of which targets Azure template development. We can use VS Code to develop our code and then use a container with all the Azure tools to deploy it. This post describes this very process, from Docker installation to VS Code configuration and, finally, template deployment. First we need to install all the necessary components on our system. These include Docker Desktop, Visual Studio Code and a patch for the Linux kernel of the Windows Subsystem for Linux (WSL 2). You can find Docker Desktop on the official Docker page (here) and VS Code on its page (here). The Docker installation wizard will add any required Windows features…
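The excerpt is cut off before the tooling details; as a minimal sketch of the idea, assuming Microsoft's public `mcr.microsoft.com/azure-cli` image (the file below is illustrative, not taken from the post):

```dockerfile
# deployment toolbox image: Azure CLI plus the Bicep compiler,
# so nothing has to be installed on the workstation itself
FROM mcr.microsoft.com/azure-cli:latest

# add the Bicep tooling the CLI uses for --template-file main.bicep deployments
RUN az bicep install

WORKDIR /work
ENTRYPOINT ["az"]
```

Built with `docker build -t az-toolbox .`, the image can then deploy templates mounted from the workstation, e.g. `docker run --rm -v "$PWD":/work az-toolbox deployment group create --resource-group <rg> --template-file main.bicep` (after logging in).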

Reporting Progress with Powershell

Often in the life of an IT engineer or administrator, the command shell is a means of performing an operation on multiple items that would otherwise take a significant amount of time via GUI tools. When planning such operations, reporting on progress is a nice thing to have, since it may take a while for the entire operation to complete and we need to know if things are working out as expected. In the majority of cases, we know the actions that need to be taken in advance, and thus the easiest way to go through them is to use a For loop. With For, we start processing the items in an array or list from either its start or its end, and work our way to the last item by incrementing an index, usually by one. The Powershell code below will get the items in the current directory and process them sequentially, reporting the status of the operation after each item:

# Get the list of items to process
$items = @( Get-ChildItem )

# Loop through the items…
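The excerpt's PowerShell is cut off; the same loop-and-report pattern can be sketched in plain shell (the item names are placeholders, whereas the post iterates over Get-ChildItem results):

```shell
# process a fixed list of items, reporting progress after each one.
# the item names are placeholders for whatever the real work operates on.
items="alpha beta gamma delta"
set -- $items
total=$#

count=0
for item in $items; do
    count=$((count + 1))
    # ... the real per-item work would go here ...
    percent=$((100 * count / total))
    echo "[$count/$total] ($percent%) processed $item"
done
```

The final line printed is `[4/4] (100%) processed delta`, confirming at a glance that every item was handled.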

Merging Commits in Git

The adoption of cloud technologies has introduced a new way of managing resources, whether they are related to infrastructure, security, applications, etc. In the past, code repositories such as Git were used just to manage application code, but now we use them to manage pretty much everything. However, not everyone is familiar with repositories and their principles, and that sometimes leads to issues. A very common practice I've come across many times lately is having too many commits related to a single feature or request. I'm not suggesting committing only big chunks of code that contain many changes at once, but rather getting rid of multiple commits that each overlap with the previous one or contain only minor changes, at least when pushing to a remote that other people may be watching. The three most common ways to clean up commits are reset, merge and rebase. For demonstration purposes we'll be working with a repository that contains Azure ARM templates…
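As a minimal sketch of one of the three approaches, a soft reset can fold several work-in-progress commits into one; the repository, file name and messages below are illustrative, not from the post:

```shell
# squash three work-in-progress commits into one with a soft reset.
# everything here is a throwaway demo repository.
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo"

echo "one"   > app.json; git add app.json; git commit -qm "wip: first draft"
echo "two"   > app.json; git add app.json; git commit -qm "wip: typo"
echo "three" > app.json; git add app.json; git commit -qm "wip: tweak"

# rewind the branch two commits; the file changes stay staged
git reset --soft HEAD~2
# fold everything into the first commit with a proper message
git commit --amend -qm "Add ARM template for feature X"

git log --oneline    # a single, meaningful commit remains
```

Because this rewrites history, it is only safe on commits that have not yet been pushed to a shared remote.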

Accessing Azure Resources via Private Endpoints

In the era of the cloud, we are creating resources that in the majority of cases are exposed to the public internet, lowering the security posture of the solution. Resources such as Virtual Machines, Database Servers, etc. can easily be left unprotected when the strategy and governance rules are not followed. The storage account resource, for instance, when configured with a public endpoint, is reachable from all networks, including the public internet. If we try to resolve the name of one of the endpoints of the storage account (let's take file, for example), we'll be directed to a public IP address. This can be considered a security risk, since we are not only exposing the storage account to the entire internet but also allowing data to traverse networks that we have no control over. To avoid this risk, we can configure a Private Endpoint for the storage account on the VNet that the clients are connected to. When using a Private Endpoint, an interface to the storage account will be created in the VNet…
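A sketch of what name resolution looks like once the private endpoint is in place; the account name and private IP are made up, but `privatelink.file.core.windows.net` is the zone Azure actually uses for the file endpoint:

```
; public DNS: the endpoint name now points into the privatelink zone
mystorageacct.file.core.windows.net.   CNAME  mystorageacct.privatelink.file.core.windows.net.

; private DNS zone "privatelink.file.core.windows.net", linked to the VNet,
; resolves to the private endpoint's network interface
mystorageacct                          A      10.0.1.4
```

Clients on the VNet resolve the same endpoint name as before but receive the private address, so traffic stays on the virtual network; clients elsewhere still see the public CNAME chain.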

CPolydorou.HostsFile Module Updates

This post is to notify you of the recent changes in my HostsFile module. As part of the process of updating the Powershell modules I've published on the PSGallery repository to target a later version of the .NET Framework, the CPolydorou.HostsFile module has been updated as well. The newest version of the module (1.2.1) now requires .NET 4.6.2, which comes pre-installed with Windows Server 2016. In case you are still using the module on operating systems older than that, you can always install version 1.1.1, the latest version built for .NET Framework 4.5. However, that version will not get any updates or bug fixes in the future. If you're installing the module on a new system and require the old version, you can install it using the command below:

Install-Module -Name CPolydorou.HostsFile -RequiredVersion 1.1.1

Designing Solutions for newer Azure Regions

Microsoft Azure keeps growing, and new regions are being announced and released one after the other, exceeding 60 at this time! New regions usually draw customers that are geographically located near them, with decreased latency being the key factor. Apart from new customers, large organizations usually move some of their workloads to the new regions for the same reasons. When designing a solution that is going to be deployed or extended to a recently released Azure region, you should always make sure that the resources that are part of your solution are available in that particular region. To make the life of architects easier, Microsoft has created a webpage that provides service availability information, available here. This page will not only show you the regions a service is available in, but will also allow you to add all the components of your solution and confirm its availability as a whole. Let's take for example a solution that comprises Azure F…

Additions to the CPolydorou.Security Powershell Module

This post has been triggered by a project that I'm currently working on that involves nginx and containers. As part of the nginx configuration, I had to create a certificate key pair that was going to be used to secure traffic towards nginx. The challenge I faced was converting the PFX certificate that was handed to me by the Certificate Authority team to the format nginx understands. Considering that this was a process I'd followed many times in the past (and also blogged about), I decided to update a Powershell module of mine, named CPolydorou.Security, in order to make the use of OpenSSL friendlier to the Windows administrator. The four new functions that are included in the latest version (1.2.0) are: Export-ServerCertificateFromPFX, Export-CertificateChainFromPFX, Export-PrivateKeyFromPFX and Decrypt-PrivateKey. Let's go through them one by one to see how they can help! For the examples demonstrated below, I've created secure string objects for the passphrases…
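The underlying OpenSSL steps that such functions wrap can be sketched directly in shell; the file names and the "changeit" passphrase are illustrative, and a throwaway self-signed pair stands in for the PFX handed over by the CA team:

```shell
# work in a scratch directory
cd "$(mktemp -d)"

# stand-in for the CA-provided PFX: a throwaway self-signed key and certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=nginx.example.test" \
        -keyout key.pem -out cert.pem 2>/dev/null
openssl pkcs12 -export -inkey key.pem -in cert.pem -out bundle.pfx -passout pass:changeit

# extract the server certificate, for the nginx ssl_certificate directive
openssl pkcs12 -in bundle.pfx -passin pass:changeit -clcerts -nokeys -out server.crt
# extract the private key with the passphrase stripped, for ssl_certificate_key
openssl pkcs12 -in bundle.pfx -passin pass:changeit -nocerts -nodes -out server.key

# confirm the extracted certificate is the expected one
openssl x509 -in server.crt -noout -subject
```

A real PFX from a CA would also contain the intermediate chain, which `-cacerts -nokeys` extracts for nginx's chained certificate file.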

Configuring Virtual Machines Using Desired State Configuration - Part 6 - Azure Automation

Continuing the post series about Microsoft DSC, and after a long break from Azure Global, we are going to see how Azure Automation Accounts help with DSC configurations on Azure Virtual Machines. First, we are going to deploy an Automation Account, and then we're going to register a Windows Server VM to it so that it gets and applies our configuration. To deploy an Automation Account, navigate to the Azure Portal and search for "Automation". Select the Automation offering from Microsoft and click "Create". You will then have to select a name for your Automation Account, the subscription and resource group to deploy to, and the location. When done with the deployment options, hit Create to submit the deployment. Now that we have created an Automation Account, we need to upload and compile a configuration so that it…

Configuring Virtual Machines Using Desired State Configuration - Part 5 - Creating a Pull Service

Hello and welcome to an article on how to create your own Powershell DSC Pull Service. This is the fifth article of the series, and we are starting to build our own DSC infrastructure. To configure the DSC Pull Service we are going to need a Windows Server machine running at least Powershell v5, and a certificate. For the purpose of this post, I've created a dedicated machine, and the certificate that I'm going to use is issued by my Active Directory Certificate Services. What would be the easiest way to create a Pull Service? DSC, of course. We are going to push a DSC configuration to our machine that will convert it into a DSC Pull Service! First off, we're going to need that certificate installed in the Personal container of the machine. I prefer using my CA to issue certificates, since it is trusted by all the domain member machines, and for this one I've enrolled for a certificate based on the Web Server template. You can use any web server certificate as long as…

Configuring Virtual Machines Using Desired State Configuration - Part 4 - Applying Configurations

Welcome to the fourth article of the Desired State Configuration series! Today we're going to discuss the ways you can apply configurations to virtual machines and how to examine the LCM verbose output. Without further ado, let me present the two ways you can apply configurations: Push and Pull. Push: the push method is the simplest; we just send the configuration to the node, and from then on the node has to act accordingly. We just sit and watch. Let's go through the process of pushing a configuration to a node. I'm going to be using my domain controller machine as the management host, to avoid touching the node at all. First we have to put together the configuration itself. Here we are going to use a configuration from one of the previous posts that installs the Web-Server role and copies a web page file. We'll confirm that the Web-Server role is not installed on the node using the Get-WindowsFeature cmdlet: Great, IIS is not installed. Now, the default LCM configuration…