Showing posts from 2019

Using Azure Data Lake to Archive, Audit and Analyse Log Files

When operating relatively large and complex environments, having all the operational information available as quickly as possible is one of the key factors that protects you from downtime and breached SLAs, and gives you a full view of the environment so you can act proactively. There are many cloud and on-premises solutions that can assist, but some cases require a more customized approach. Don't get me wrong, Azure OMS and other solutions like it are great for maintaining control of and reporting on your services. However, some organizations have needs that cannot be covered by OMS, such as very long retention periods, log file formats that cannot be directly parsed, etc. So what we need is a place to store the files and a very fast way to query them. This is where Azure Data Lake comes into play. Uploading your log files to Azure Data Lake, or feeding the Data Lake directly using Azure Stream Analytics, will give you the ability…
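As a rough sketch of the upload step, the Azure CLI can push a log file into a Data Lake Store (Gen1) account; the account name and paths below are placeholders for your own environment:

```shell
# Upload a local log file into a Data Lake Store account.
# "mydatalake" and the paths are example values, not real resources.
az dls fs upload --account mydatalake \
                 --source-path /var/log/app.log \
                 --destination-path /logs/app.log
```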

Querying Active Directory using PowerShell

Every Windows administrator has had the need to get a list of Active Directory objects matching some criteria, either to create a report or to update them in one batch. Fortunately, Microsoft provides a PowerShell module to interact with Active Directory as part of the RSAT tools, and this module is installed by default on Domain Controllers. The commands in this module interact with the Domain Controller using the Active Directory Web Services. But what if you are not logged on to a Domain Controller, or you don't have RSAT installed? There is a way to query the Domain Controller and get the information you want, without the limitations of the Web Services and in a much faster way, using .NET. First, we have to create a DirectorySearcher object and configure its LDAP filter. Calling any of the find methods will return the results for the specified filter. In the following example, I'm using FindOne() to get my account…
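A minimal sketch of the DirectorySearcher approach described above; the sAMAccountName value is a placeholder for your own account:

```powershell
# Search the current domain without the RSAT module, via .NET
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectCategory=person)(sAMAccountName=jdoe))"

# FindOne() returns the first match (or $null if nothing matches)
$result = $searcher.FindOne()
if ($result) {
    $result.Properties["distinguishedname"]
}
```

FindAll() works the same way but returns every object matching the filter.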

Domain Controller Machine Password Reset

In my lab environment, I've configured two Active Directory sites, since most enterprises have offices in more than one place. My lab, however, is not running 24/7, and the domain controllers in the second site are rarely turned on, in order to save resources. This leads to Active Directory replication issues, such as the "The target principal name is incorrect" error when I execute repadmin /syncall /AdeP. To remedy the issue, we have to reset the machine password of the domain controller that has been offline. First off, we are going to stop and disable the Kerberos Key Distribution Center (kdc) service on the problematic domain controller, in our case DC4. There may be some tickets in the cache, so we should also clear them using klist purge. Now it's time to change the machine password of the domain controller using the command netdom resetpwd /s:dc3 /ud:lab\administrator /pd:* Replace "lab\administrator" with an account on your…
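The steps above can be consolidated into one sequence, run on the problematic DC (DC4 in this lab); re-enabling the KDC at the end is my assumption, since the excerpt cuts off before that point:

```powershell
# Stop and disable the Kerberos Key Distribution Center service
Stop-Service -Name kdc
Set-Service -Name kdc -StartupType Disabled

# Clear any cached Kerberos tickets
klist purge

# Reset the machine password against a healthy DC (DC3 in this lab)
netdom resetpwd /s:dc3 /ud:lab\administrator /pd:*

# Once replication is healthy again, bring the KDC back
Set-Service -Name kdc -StartupType Automatic
Start-Service -Name kdc
```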

Exchange Request Tracing

I came across a very strange Exchange behavior the other day while troubleshooting a full access permission that was not working as expected. Although a user had been granted full mailbox permission on a shared mailbox, when he tried to open it using OWA, he got an HTTP Error 500 message and the request failed. We'll start troubleshooting by investigating the front-end IIS log files. After all, that is the first step of the request processing. Using the user's UserPrincipalName, I managed to find the error in the log: As you can see, the HTTP status code is "500", which indicates an internal server error similar to the one the user encountered. This file, however, does not provide much information about the cause of the error, so we'll take a look at the back end as well. After each request reaches the front-end Exchange layer, it is proxied to the back end, but the destination server may be other than the front-end server that received…
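A quick sketch of searching the front-end IIS logs for the failing requests; the log path and UPN below are placeholders for your environment:

```powershell
# Find 500 responses for a specific user in the front-end IIS logs.
# Adjust the site folder (W3SVC1) and the UPN to match your servers.
$logPath = 'C:\inetpub\logs\LogFiles\W3SVC1\*.log'

Get-ChildItem -Path $logPath |
    Select-String -Pattern 'user@contoso.com' |
    Where-Object { $_.Line -match ' 500 ' }
```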

DNS Query Web Interface

DNS plays one of the most important roles in IT, there's no doubt about it, especially when you have services hosted on public clouds or accessible over the internet. When troubleshooting issues with such services, the DNS configuration and propagation should always be checked, since any issues there would definitely have an impact on the service. Although you can use the tools provided by your operating system, such as nslookup, dig and Resolve-DnsName, it can be a bit complicated to get the right query. Fortunately, there are websites out there that can help you by providing a friendly user interface. The website I use the most is Dig Web Interface, so let's take a quick tour. The site has a minimal design, with a textbox to enter your hosts or IPs and a few options about the query and the name server to use: Let's go through some example queries. To search for the name servers of a zone, use the "NS" type: As you can see, my domain is hosted…
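For reference, the same NS lookup can be done from PowerShell with Resolve-DnsName; the zone name and resolver below are just examples:

```powershell
# Query the NS records of a zone against a specific name server
Resolve-DnsName -Name example.com -Type NS -Server 8.8.8.8
```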

Building a PowerShell cmdlet using C# - Part 4: Packaging the Cmdlet

This is the fourth and last article of the series on how to build a PowerShell cmdlet. In the previous articles we created a simple cmdlet, troubleshot the code and took a quick tour of the output streams. Although you can import the DLL built by Visual Studio directly as a module, this is not the best approach, since you won't be able to publish it, provide help for your cmdlets, or use many of the other features provided by PowerShell modules. To create a module, we'll start by creating a folder with the same name as the module. This is the folder that is going to hold all the module files. We'll call this module DemoCmdlet, since this is the name of the Visual Studio project and the DLL produced. All modules require a module manifest, so let's create one with the New-ModuleManifest cmdlet: PS C:\DemoCmdlet> New-ModuleManifest -Path DemoCmdlet.psd1 PS C:\DemoCmdlet> ls…
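A sketch of the manifest creation with the assembly wired in as the root module; the version number is an example, and you would normally also fill in author and description fields:

```powershell
# Create a manifest that loads the compiled cmdlet assembly
New-ModuleManifest -Path .\DemoCmdlet.psd1 `
                   -RootModule 'DemoCmdlet.dll' `
                   -ModuleVersion '1.0.0'

# The module can then be imported via the manifest
Import-Module .\DemoCmdlet.psd1
```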

Get the filesystem hierarchy using PowerShell

When working at a command prompt, especially on systems without a graphical user interface, getting a view of the filesystem structure under a directory can be an issue. We usually have to get the files in the directory and then recursively process all its subdirectories. When it comes to PowerShell, I've created a simple function to do exactly that. To demonstrate the usage of the cmdlet, I have created a test application under "C:\Program Files\MyApp". To get a quick look at the directory structure of the application, we just have to execute: Display-DirectoryTree -Path 'C:\Program Files\MyApp\' Each level is indented to provide the hierarchy feel. To display the type of each object, use the -IncludeType parameter. This will prefix each object with a "D" for a directory or an "F" for a file: By default, the tab character is used for the indentation of each level in the hierarchy. You can select the string using…
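A minimal sketch of what such a function might look like; this is not the author's exact implementation, but the parameter names mirror the ones mentioned above:

```powershell
function Display-DirectoryTree {
    param(
        [string]$Path,
        [switch]$IncludeType,
        [string]$Indent = "`t",   # string used for each indentation level
        [int]$Level = 0           # current depth, used internally
    )

    foreach ($item in Get-ChildItem -Path $Path) {
        $prefix = $Indent * $Level
        if ($IncludeType) {
            # "D" for directories, "F" for files
            $type = if ($item.PSIsContainer) { 'D ' } else { 'F ' }
            $prefix += $type
        }
        Write-Output ($prefix + $item.Name)

        # Recurse into subdirectories, one level deeper
        if ($item.PSIsContainer) {
            Display-DirectoryTree -Path $item.FullName -IncludeType:$IncludeType `
                                  -Indent $Indent -Level ($Level + 1)
        }
    }
}
```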

How To Join a CentOS 7 machine to an Active Directory domain

Joining a Linux machine to an Active Directory domain is an uncommon task, but I have run across it a few times. The increasing popularity of Linux will sooner or later attract more Windows administrators and users, and more machines will be joined to Active Directory domains. Back in the day, joining the domain meant a lot of configuration file editing, many packages that had to be aligned, and a good amount of luck! Fortunately, this process has been reduced to a handful of commands. Let's see it in action. The first step is to install the necessary packages: yum install sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients policycoreutils-python -y Give it some time to download and install the packages and their dependencies, and you should end up with something similar to the below: Another requirement is that the machine has to be able to resolve the domain DNS records. Check your /etc/…
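Once the packages are installed and DNS resolution works, the discovery and join steps are a sketch like the following; the domain name lab.local and the administrator account are placeholders for your own environment:

```shell
# Verify the domain is discoverable via DNS and realmd
realm discover lab.local

# Join the domain; you will be prompted for the account's password
realm join --user=administrator lab.local

# Verify the machine can now resolve domain accounts
id administrator@lab.local
```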

Updating Exchange Server Certificates

Microsoft Exchange is one of the applications installed in almost every company's IT infrastructure and, as all applications should, it uses SSL to secure network communications. SSL uses certificates, and sooner or later they all expire. Below is the process I usually follow when updating the certificates on multiple servers. We'll start by creating a variable that will hold the thumbprint of the new certificate: $newCertificateThumbprint = "3A5F93553E8346618131DA97CAE6E3962C266608" Then we are going to copy the pfx file to all the servers: $servers = Get-MailboxServer | % Name | Sort-Object $servers | %{ $destination = '\\' + $_ + '\c$\Temp\' Copy-Item -Path "C:\Temp\Cert\Certificate2019.pfx" -Destination $destination -Verbose } Now that the pfx is available on all servers, we are going to import it into the local computer certificate store using Invoke-Command…
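The excerpt cuts off before the actual import command, so the following is only a sketch of how that remote import might look, reusing the $servers variable and pfx path from the snippet above:

```powershell
# Prompt once for the pfx password as a secure string
$password = Read-Host -Prompt 'PFX password' -AsSecureString

# Import the pfx into the local machine store on every server
Invoke-Command -ComputerName $servers -ScriptBlock {
    param($pfxPassword)
    Import-PfxCertificate -FilePath 'C:\Temp\Certificate2019.pfx' `
                          -CertStoreLocation 'Cert:\LocalMachine\My' `
                          -Password $pfxPassword
} -ArgumentList $password
```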

Building a PowerShell cmdlet using C# - Part 3: Using the Output Streams

Now that we've seen how to set up the code base for a cmdlet and how to debug it, let's take a look at how it interacts with the world via the various output streams. Verbose Output - WriteVerbose() One of the most useful functions of every cmdlet. Provides detailed information about the execution of the command. Debug Output - WriteDebug() The latest versions of PowerShell give end users the ability to debug. The -Debug parameter, along with the WriteDebug method, pauses the execution at specific points so that the end user can inspect it. Warning Output - WriteWarning() It goes without saying that if any issues arise during the execution of the cmdlet, those should be reported to the user. For minor issues that do not have an impact on the result of the command, use the WriteWarning method. Errors - WriteError() However, when there are issues that do have an impact on the result, we have to inform the user about them. Things get a little…
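A minimal C# sketch of a cmdlet writing to each of the streams mentioned above; the cmdlet itself is illustrative and not part of the series' DemoCmdlet project:

```csharp
using System;
using System.Management.Automation;

[Cmdlet(VerbsDiagnostic.Test, "Streams")]
public class TestStreamsCommand : Cmdlet
{
    protected override void ProcessRecord()
    {
        WriteVerbose("Starting work...");        // shown when -Verbose is used
        WriteDebug("Internal state: OK");        // shown when -Debug is used
        WriteWarning("Minor issue, continuing"); // shown by default

        // A non-terminating error: reported, but processing continues
        WriteError(new ErrorRecord(
            new InvalidOperationException("Something failed"),
            "DemoError",
            ErrorCategory.InvalidOperation,
            targetObject: null));

        WriteObject("Done");                     // the success (output) stream
    }
}
```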