Gathering hardware inventory across a network

The script in this post is one I’ve been meaning to write about for a long while, because it is an example of a script automating a task that would be mind-numbingly boring and time-consuming to do by hand.

For context – consider an organisation with a distributed network of around 100 locations. Multiple AD domains are present across the network, with a server per location running DHCP and DNS services. The environment is significantly heterogeneous, comprising a mix of personal and organisational devices – smartphones, VOIP phones, tablets, laptops, desktops. In this environment, no central record of hardware profiles or maintenance cover exists for these devices.

The immediate instinct is very often “Get a tool to do it”. And yes, there are some nice tools, free and paid-for, that can do this. Now imagine you’re trying to do this in as little time as possible (ideally by the end of the current week, say), leveraging only the systems already deployed. And you either have, or can trivially deploy, the Group Policies and host firewall rules required to allow remote PowerShell from authorised sources. (Or, if you’re stuck with now-obsolete operating systems like Windows 7 or Server 2008 R2, executing the equivalent Get-WMIObject queries via wmic…)

You could do worse than something like this script, which covers the hardware inventory collection. It has a companion script – now sadly very obsolete – whose function was to parse the output file generated by this script and use the vendor API to check for useful information like purchase date, warranty cover type, and warranty expiration. On one hand, this script should only ever be viewed as a stepping stone, because proper inventory management should be based on a database system that can generate reports and automatic notifications about interesting or important events. On the other hand, a spreadsheet might be a terrible tool for tracking inventory – but it’s still less bad than not tracking inventory at all.

This version doesn’t really focus on the hardware specifications of the identified equipment, but it could easily be expanded to include this, e.g. if you were also trying to evaluate hardware suitability for an OS or software upgrade. The Win32 classes OperatingSystem, Processor, ComputerSystem, LogicalDisk, NetworkAdapter and NetworkAdapterConfiguration can be used to collect a good baseline set of information about the hardware specifications of target devices.
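For instance, a per-host query along these lines could be bolted onto the main loop below (a rough sketch only – $shortname and $cred are the variables used later in this script):

$osinfo=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_OperatingSystem -Property Caption,Version,OSArchitecture)
$cpuinfo=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_Processor -Property Name,NumberOfCores)
$raminfo=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_ComputerSystem -Property TotalPhysicalMemory)
$diskinfo=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_LogicalDisk -Filter "DriveType=3" -Property DeviceID,Size,FreeSpace)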

Anyway, let’s look at the script.

# Setup

$Domain1Cred=(Get-Credential -Message "Please enter local admin credentials for Domain 1" -Username "DOMAIN1\")
$Domain2Cred=(Get-Credential -Message "Please enter local admin credentials for Domain 2" -Username "DOMAIN2\")

[System.Collections.ArrayList]$machinelist=@()
$scandate=(Get-Date -Format "yyyyMMdd")

$dc=Read-Host("Enter the FQDN for a domain controller to query for DHCP servers")
foreach ($d in (Invoke-Command -ComputerName $dc -Credential (Get-Credential) -Scriptblock {Get-DHCPServerInDC})) {
	Write-Host $d.DNSName -foregroundcolor white
	$DHCPname = ($d.DNSName -split "\.")[0]
	try {
		# Get list of active DHCP leases
		$scope=Get-DHCPServerv4Scope -Computername $d.DNSName -ErrorAction Stop
		$leases=Get-DhcpServerv4Lease -ScopeId $scope.ScopeID -ComputerName $d.DNSName -ErrorAction Stop | ? {$_.AddressState -like "Active" -and (($_.Hostname -match "DTW|LTW|TB"))} | Sort-Object -Property LeaseExpiryTime -Descending
		[boolean]$serveronline=$true
		Write-Host "Connected to DHCP Server $($d.DNSName), lease list obtained..." -foregroundcolor white
	} catch {
		[boolean]$serveronline=$false
		Write-Host "Unable to connect to DHCP server $($d.DNSName)" -foregroundcolor red
	}

First up, we’re getting credential tokens for the domains in use across the environment. I’ve kept this script to two domains and used if/else statements, but if additional domains come into play, it can easily be expanded by moving to a switch statement for the domain-specific information (sketched below). We’re also setting up an ArrayList, because normal arrays are inefficient at adding items as they grow, and I was looking to retrieve data on several thousand devices. The scandate variable is just for reference, because I anticipated a situation where this might need to be run multiple times, and it would be useful to know when a system was last seen online. The dc variable is set via Read-Host to save me having to load the entire Active Directory module just to get a domain controller name.
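For what it’s worth, a sketch of the switch-based version might look like this ($l being the lease variable from the loop further down; DOMAIN3 and $Domain3Cred are hypothetical additions):

switch -Regex ($l.Hostname) {
	"DOMAIN1" { $domain="DOMAIN1"; $cred=$Domain1Cred; break }
	"DOMAIN2" { $domain="DOMAIN2"; $cred=$Domain2Cred; break }
	"DOMAIN3" { $domain="DOMAIN3"; $cred=$Domain3Cred; break }
	default { $domain="UNKNOWN"; $cred=$Domain1Cred }
}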

After that, we use Invoke-Command to run Get-DHCPServerInDC, which returns a list of authorised DHCP servers in the DC’s domain, and iterate through that list. For each DHCP server, we start by getting the scope and leases, filtering for active leases (to exclude inactive reservations) and for hostnames matching our device naming conventions (to exclude unwanted devices). Because -match takes a regular expression, multiple terms can be separated with | – a really neat way of keeping Where-Object clauses short and readable. Lastly, we set a boolean called serveronline.

	if ($serveronline) {
		# For each machine in $leases, check for most recent directory in C:\Users and match against $user
		foreach ($l in $leases) {
			$shortname=($l.Hostname -split "\.")[0]
			# Check shortname and select credentials for WMI query accordingly
			if ($l.Hostname -match "DOMAIN1") {
				$domain="DOMAIN1"
				$cred=$Domain1Cred
			} else {
				$domain="DOMAIN2"
				$cred=$Domain2Cred
			}		
			try {
				Test-Connection $shortname -ErrorAction Stop | Out-Null;
				[boolean]$online=$true
			} catch {
				Write-Host "Machine $($shortname) could not be reached." -foregroundcolor red;
				[boolean]$online=$false;
			}
			if ($online) {
				# $serialno=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_BIOS).SerialNumber
				$hwinfo=(Get-WMIObject -Computername $shortname -Credential $cred -Class Win32_ComputerSystemProduct -Property vendor,name,IdentifyingNumber)
			}

			# Create object, add to machinelist arraylist
			$Obj=New-Object System.Object
			$Obj | Add-Member -MemberType NoteProperty -Name "DHCPServer" -value $DHCPname -Force		
			$Obj | Add-Member -MemberType NoteProperty -Name "Domain" -value $domain -Force		
			$Obj | Add-Member -MemberType NoteProperty -Name "Date checked" -value $scandate -Force
			$Obj | Add-Member -MemberType NoteProperty -Name "Hostname" -value $shortname -Force
			$Obj | Add-Member -MemberType NoteProperty -Name "Make" -value $($hwinfo.Vendor) -Force
			$Obj | Add-Member -MemberType NoteProperty -Name "Model" -value $($hwinfo.Name) -Force
			$Obj | Add-Member -MemberType NoteProperty -Name "SerialNumber" -value $($hwinfo.IdentifyingNumber) -Force
			# Cast to void to suppress the index that ArrayList.Add() returns
			[void]$machinelist.Add($Obj)
			
			# Clean up variables
			Remove-Variable -Name shortname,cred,online,hwinfo,Obj -Force -ErrorAction SilentlyContinue
		}

If the server is online, we iterate through the list of active leases. For each lease, we check the hostname and use the DNS suffix to determine domain membership, setting the domain and cred variables accordingly. We then check that the host is online with Test-Connection, and if so use Get-WMIObject to query the Win32_ComputerSystemProduct class for three values. These correspond to make, model and serial number – the commented-out line shows that I initially assumed I’d have to use the Win32_BIOS class for the serial number. Lastly, a new object is created and the relevant information is added to it across a set of properties. The object is then added to the ArrayList, and variables from the foreach-lease loop are cleaned up.

	} else {
		# Add entry to machinelist for DHCP Server to record that it is unreachable
		$Obj=New-Object System.Object
		$Obj | Add-Member -MemberType NoteProperty -Name "DHCPServer" -value $DHCPname -Force
		$Obj | Add-Member -MemberType NoteProperty -Name "Date checked" -value $scandate -Force
		$Obj | Add-Member -MemberType NoteProperty -Name "Hostname" -value "SERVER UNREACHABLE" -Force
		# The placeholder object still needs adding to the list
		[void]$machinelist.Add($Obj)
		Remove-Variable -Name Obj -Force -ErrorAction SilentlyContinue
	}
	# Clean up variables after processing all leases
	Remove-Variable -Name scope,leases,serveronline -Force -ErrorAction SilentlyContinue
}

If the server isn’t online, a placeholder object is created and added to the arraylist to note that the server was inaccessible. Lastly, variables from the foreach-dhcpserver loop are cleaned up.

Write-Host "Finished checking DHCP servers, generating output file...." -foregroundcolor white

$outputfile="C:\tmp\Scripts\Inventory\" + (Get-Date -Format "yyyy-MM-dd") + "_Inventory.csv"
$machinelist | Export-CSV -Path $outputfile -Encoding UTF8 -NoClobber -NoTypeInformation

Once all DHCP servers have been checked, all that’s left is exporting the arraylist as a CSV. (And in an ideal world, using that CSV file to populate a database-driven system…)

OWA problems and Exchange Canary Data

This post is about a short script I wrote a few years ago to solve a problem I encountered periodically with OWA and ECP access on an Exchange 2013 environment. The problem symptoms were unhelpful – a user attempting to log into OWA would get a message reading “Something went wrong”, and not a lot more. Not all users would experience the problem, but sometimes it would also occur when accessing ECP – and when it did happen, it would affect enough users to count as a major incident, so absent a proper fix for the root cause, a quick-to-deploy workaround would have to suffice.

After some digging and head-scratching, we discovered that this is a known issue on Exchange environments running a very old CU version. The specific support article describing the issue (which I didn’t find until a good while later, because the error message is vague and provides neither a specific error code nor an obvious Event ID) is here. The fix listed there is “Install CU11 or later”, which is good advice in general but also means “perform a significant maintenance task on your Exchange infrastructure”, which is not something to undertake unprepared in the middle of a service outage. Exchange 2013 and 2016 both have a pretty messy case of .NET dependency hell, so it’s a task best approached with a clear plan and an approved outage window…

Absent that fix, there is a workaround for the issue. A good description of it can be found here, which explains the root cause as a mismatch between information held in a client cookie when accessing OWA or ECP and a corresponding value held in Active Directory. The steps given to resolve the issue are manual, however, and require using ADSIEdit to clear the values of the msExchCanaryData properties.

I started by writing a function to automate checking these properties and clear their value as necessary:

Function Check-ExchangeCanaryData {
	$obj=(Get-ADObject -filter 'ObjectClass -eq "MSExchContainer"' -Searchbase "<container for domain>,CN=Microsoft Exchange,CN=Services,CN=Configuration,<top of FQDN for domain>" -Properties msExchCanaryData0,msExchCanaryData1,msExchCanaryData2 | ? {$_.Name -match "Client Access"})
	if (($Obj.msExchCanaryData0 -eq $null) -and ($Obj.msExchCanaryData1 -eq $null) -and ($Obj.msExchCanaryData2 -eq $null)) {
		Write-Host "Exchange Canary Data looks good." -foregroundcolor green
	} else {
		Write-Host "One or more Exchange Canary Data values are non-null, this may indicate a problem with OWA or ECP access.`n$($Obj.msExchCanaryData0)`n$($Obj.msExchCanaryData1)`n$($Obj.msExchCanaryData2)`n" -foregroundcolor red
		try {
			if ($Obj.msExchCanaryData0 -ne $null) {
				$Obj.msExchCanaryData0=$null
			}
			if ($Obj.msExchCanaryData1 -ne $null) {
				$Obj.msExchCanaryData1=$null
			}
			if ($Obj.msExchCanaryData2 -ne $null) {
				$Obj.msExchCanaryData2=$null
			}
			Set-ADObject -Instance $obj -ErrorAction Stop
			Write-Host "Reset Exchange Canary Data successfully." -foregroundcolor green
			Remove-Variable -Name obj -Force -ErrorAction SilentlyContinue
		} catch {
			Write-Host "Failed to reset Exchange Canary data, error was:`n$($_.Exception.Message)." -foregroundcolor red
		}			
	}	
}

Initially when writing this I was thinking of executing it directly on a domain controller, but as long as the Active Directory PowerShell RSAT module is installed on the local machine it can be run from anywhere, as the Get-ADObject and Set-ADObject cmdlets will automatically pick a domain controller to query.

The function itself uses Get-ADObject to retrieve the msExchCanaryData properties from the Configuration container in AD, and checks whether any of them have non-null values. If they do, the values are forcibly reset to $null.
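Since it’s written as a function, invoking it is simple once the module is available – something like this (a minimal sketch; ActiveDirectory is the standard RSAT module name):

if (Get-Module -ListAvailable -Name ActiveDirectory) {
	Import-Module ActiveDirectory
	Check-ExchangeCanaryData
} else {
	Write-Host "ActiveDirectory module not found - install RSAT first." -foregroundcolor red
}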

This next part I didn’t add until later, when I re-read this article and noticed this bit:

4. Open IIS Manager on your CAS server, go to 【Application Pools】, right click 【MSExchangeOWAAppPool】 and click Recycling

All the Exchange servers in our environment were configured as Client Access servers at the time, so I added a second part to the fix process:

$cred=Get-Credential
$exchsrvs=(Get-ADGroupMember "Exchange Servers" -Server "<DC FQDN>" -Credential $cred | ? {$_.ObjectClass -eq "computer"}).Name
foreach ($exchsrv in $exchsrvs) {
	Write-Output $exchsrv
	$session=New-PSSession -Computername $exchsrv -Credential $cred
	Invoke-Command -Session $session -Scriptblock {
		ipmo WebAdministration;
		try {
			foreach ($pool in (gci "IIS:\AppPools" | ? {$_.Name -match "OWAApp|ECPApp"})) {
				Restart-WebAppPool $pool.Name
			};	
			$pools=gci "IIS:\AppPools" | ? {$_.Name -match "OWAApp|ECPApp"}
			if (!($pools | ? {$_.State -notmatch "Started"})) {
				Write-Output "All Application Pools have started successfully."
			} else {
				Write-Output "All Application Pools have not started successfully."
			}
		} catch {
			Write-Output "An error occured while recycling the Application Pools, message was $($_.Exception.Message)"
		}
	}
	Remove-PSSession $session -ErrorAction SilentlyContinue
}
Remove-variable -name cred -force

Again, pretty straightforward – prompt for credentials to use in a remote session, query AD for a list of Exchange Servers (rather than using a hard-coded list), then iterate through the list.

For each server in the list, the name is written to the console, then a remote session is created and a scriptblock is executed that does the following:

  1. import the WebAdministration module
  2. query IIS for App Pools with OWAApp or ECPApp in the name
  3. recycle/restart the relevant App Pools
  4. re-query IIS for the App Pools to verify that their status is “started”

At the end of those steps, the session is removed and the script loops to the next server.

As it turns out, the CU that fixes the issue does so by scheduling an automatic recycle of the affected Application Pools every 28 days. This can be configured in IIS by any administrator, without having to install the CU or make any other changes to the system. That said, given the various other problems addressed by installing CUs, the better fix for the root cause is to stay as close to the current release as is practical in your environment.
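For reference, configuring that schedule yourself looks something like this (a sketch using the WebAdministration provider – the OWA pool name is taken from the article quoted above, the ECP one is its counterpart; verify both against IIS on your own servers):

Import-Module WebAdministration
foreach ($poolname in @("MSExchangeOWAAppPool","MSExchangeECPAppPool")) {
	# Recycle the pool every 28 days, mirroring what the CU configures
	Set-ItemProperty "IIS:\AppPools\$poolname" -Name Recycling.periodicRestart.time -Value ([TimeSpan]::FromDays(28))
}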

Cleaning up old files automatically (IIS logs, Temp files, etc)

This is a simple utility script I wrote a while ago, after several instances of finding IIS servers that had been configured to use the system drive for logging and were almost out of storage as logs accumulated. I thought it would be useful to have a generalised solution to the problem of working directories filling up with old files – particularly when things are busy, it’s good to have something that manages this consistently, so that servers need less babysitting. (Regular monitoring of storage capacity is also a good idea, of course!)

The script is intended to be called via a scheduled task, with the task action running powershell.exe and passing parameters in the form “Remove-OldFiles.ps1 -directorypath C:\Temp -maxagedays 30”.
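Registering the task can itself be scripted – a sketch using the ScheduledTasks module (the task name, schedule and paths here are illustrative, not prescriptive):

$action=New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\tmp\Scripts\Remove-OldFiles.ps1 -directorypath C:\Temp -maxagedays 30"
$trigger=New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "Remove-OldFiles" -Action $action -Trigger $trigger -User "SYSTEM"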

The script itself is as follows:

param(
	[Parameter(Mandatory=$true)]
	[String]$directorypath,
	[Parameter(Mandatory=$true)]
	[int]$maxagedays
)

# Define function
function Remove-OldFiles {
	param(
		[Parameter(Mandatory=$true)]
		[String]$directorypath,
		[Parameter(Mandatory=$true)]
		[int]$maxagedays
	)
	# Verify that tmp directory exists, create if not
	if (!(Test-Path "C:\tmp")) {
		Mkdir "C:\tmp"
	}
	$logfile=$("C:\tmp\" + [string](get-Date -Format "yyyy-MM-dd") + "_Remove_OldFiles.log")
	[string](Get-Date -Format "yyyy-MM-dd HH:mm")+": Starting old file cleanup" | Out-File -filepath $logfile
	$cutoffdate=(get-Date).AddDays(-$maxagedays)
	if (Test-Path $directorypath) {
		$delcands=@(GCI -Path $directorypath -Recurse | ? {$_.LastWriteTime -lt $cutoffdate})
		[string](Get-Date -Format "yyyy-MM-dd HH:mm")+": Deletion candidates are as follows:" | Out-File -filepath $logfile -append
		$delcands | Out-File -filepath $logfile -append
		# Sort longest paths first, so files inside a directory are deleted before the directory itself
		$delcands=$delcands | Sort-Object -Property {$_.FullName.Length} -Descending
		foreach ($file in $delcands) {
			try {
				Remove-Item -Path $file.FullName -Force -ErrorAction Stop
			} catch {
				"Unable to delete file $($file.Fullname). Reason was $($_.Exception.Message)" | Out-File -filepath $logfile -append
			}
		}
	} else {
		"Specified target directory $($directorypath) could not be found!" | Out-File -filepath $logfile -append
	}
	[string](Get-Date -Format "yyyy-MM-dd HH:mm")+": Old file cleanup complete." | Out-File -filepath $logfile -append
}

# Invoke function

Remove-OldFiles -directorypath $directorypath -maxagedays $maxagedays

I’ve written the code as a function and implemented the script as a parametrised definition and invocation of the function, because this means it’ll be easier to re-use elsewhere if I need it.

The named parameters are passed into the script and then into the function, which does the following:

  1. Checks for the existence of “C:\tmp” and creates it if it doesn’t exist.
  2. Creates a logfile in C:\tmp named with the datestamp (in an alphabetically sortable format) and a suffix indicating the name of the script generating the log.
  3. Sets a cutoff date based on the current timestamp and the specified maximum file age.
  4. Tests for the existence of the target directory.
  5. If found, iterates (recursively) through the target directory and populates an array with deletion candidates.
  6. Outputs the filenames of the deletion candidates to the logfile.
  7. Iterates through the deletion candidates, starting with the longest paths first so that files within a directory are deleted before the directory itself. (This may have been fixed by now, but Remove-Item struggling with non-empty directories has been a long-standing quirk for the entirety of the time I have been using PowerShell.)
  8. If a file cannot be deleted, this is noted in the logfile.

There’s nothing more that this script really needs to do. If I were particularly concerned about the success or failure of each run, it would be easy enough to make it email our service desk mailbox and generate a ticket – something along the lines of the sketch below.
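Something like this at the end of the function would do it (a sketch only – the relay and addresses are placeholders, and Send-MailMessage is officially deprecated but still works against an internal relay):

Send-MailMessage -SmtpServer "<smtp relay>" -From "scripts@<domain>" -To "servicedesk@<domain>" -Subject "Remove-OldFiles run on $($env:COMPUTERNAME)" -Body (Get-Content -Path $logfile -Raw)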

Bugfixing my podcast management script

Like most folk working in IT, I’ve spent the last few months in a mad scramble to cope with unexpected changes, so I haven’t been doing a lot of scripting work.

I did recently have to fix a couple of bugs I’d found in my podcast management script, which were interesting in that they boil down to an ongoing challenge with PowerShell – escaping rules that vary depending on the context in which a character is being interpreted.

The first problem I had was with invalid character detection. The original version of my code was very simple:

$invalidchars=@("/",":","*","?","<",">","|")
foreach ($char in $invalidchars) {
	if ($targetfile -match $("\$char")) {
		$targetfile=$targetfile -replace $("\$char"),"_"
	}
	if ($targetfile -match '"') {
		$targetfile=$targetfile -replace '"',"'"
	}
}

A simple array containing the forbidden characters I want to exclude from filenames, and a loop to check whether each character is present in the target filename. If it is, it’s replaced with an underscore. There’s also a separate special-case check that replaces double quotes with single quotes.

These characters are forbidden because they won’t be parsed correctly as part of a filename when using Rename-Item (for example). To ensure that the interpreter checks each character as itself (rather than as its interpreted version), a backslash is used to escape the character in the regex. And this is where the problem is introduced.

Not all of the characters on my list need to be escaped, but for the ones that do, the escaping isn’t always the same. For example, the backslash character itself – when checking for the presence of a backslash with -match, you need to use “\\” as the pattern: the first backslash is the regex escape character, and the second is the literal backslash being matched. (PowerShell strings don’t treat the backslash as an escape character – that job belongs to the backtick – but the regex engine does, which is exactly the sort of inconsistency that catches me out.) Whereas the double-quote character ” needs to be escaped with a backtick, `, when it appears inside a double-quoted string. Attempting to escape it with a backslash will cause the interpreter to treat the quote as the end of the string, leaving a string which is not correctly terminated, and usually resulting in a large number of confusing errors.

I’ve run into this particular quirk before, but apparently forgot about it in this context.
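A few one-liners demonstrate the inconsistency:

"C:\tmp" -match "\\"              # True - the regex engine needs the backslash escaped
"C:\tmp" -match "\"               # error - a lone backslash is an incomplete regex escape
'He said "hi"' -replace '"',"'"   # single-quoted strings sidestep the quote problem entirely
Write-Host "a`"b"                 # inside double quotes, the backtick escapes the quote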

In an effort to rationalise the code and make it easier to expand in future, I decided to replace the array with an ArrayList of invalid characters:

if ($PSScriptRoot) {
	[System.Collections.ArrayList]$invalidchars=Import-CSV -Path $($PSScriptRoot+"\invalidchars.csv")
} else {
	[System.Collections.ArrayList]$invalidchars=Import-CSV -Path $("<full path to invalidchars.csv>")
}

Only one line is needed for this, but the above approach means I can test changes to parts of this code without having to run it as an entire script – $PSScriptRoot only has a value when a script is invoked; when run interactively, it comes back null/empty.

Each entry in the imported array contains the escaped version of a character to be checked for, and the replacement character to use:

foreach ($char in $invalidchars) {
	if ($char.Replace -eq "") {
		$replace="_"
	} else {
		$replace=$char.Replace
	}
	if ($targetfile -match $char.Escaped) {
		$targetfile=$targetfile -replace $($char.Escaped),$replace
	}
	Remove-Variable -name replace -Force -ErrorAction SilentlyContinue
}

If no replacement character is specified, the default within the script – an underscore – is used. If I decide to change this in future, I only need to change one occurrence within the script rather than each relevant entry in the CSV file being imported. This method also allows strings, not just single characters, to be specified as replacements – for example, to maintain internal consistency within the script, I wanted to make sure that when a colon is replaced, the replacement string starts with a space. It also eliminates the separate check for quotation marks in the title.
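For illustration, a cut-down version of the CSV might look like this (hypothetical contents – the Escaped column holds the regex-escaped character, the Replace column the replacement, with an empty field meaning “use the default”):

Escaped,Replace
\\,
/,
:," -"
\*,
\?,
<,
>,
\|,
"""",'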

The second issue is a logic issue at the end of the Add-Podcast function, which ultimately boils down to an unwarranted assumption on my part.

The problem is in this particular line of code, which sometimes returns an error about indexing into a null array:

$eps=($feed.rss.channel.item | ? {($_.Title -match $podcast.TitleFilter) -and ((($_.Title -split " ")[0] -as [int]) -is [int])})[0]

This does the following:
  1. the $feed.rss.channel.item object contents are piped into a Where-Object check,
  2. the object contents are filtered by the TitleFilter regex for the podcast,
  3. the object contents are filtered by the outcome of splitting each item’s Title by a space character, casting the first fragment as an integer, and checking if the fragment is a valid integer.

The root cause is that the above code assumes that, for podcasts whose titles have an episode number in them, there will be a space after the episode number. If that is not the case – e.g. podcasts using “001:” as their numbering format – the split operation will still be performed, but the first fragment will be “001:”, which cannot be cast as an integer. Hence there are no results with which to populate the array, and the error about indexing into a null array.

My current fix for this is to remove the filtering:

$eps=($feed.rss.channel.item | ? {$_.Title -match $podcast.TitleFilter})[0]

Which resolves the issue well enough for now. Strictly speaking, it allows for inclusion of episodes which don’t have a number at the start of the title (e.g. promotional episodes of different podcasts), but that can already be addressed using the TitleFilter functionality and a suitable regular expression – see the hypothetical example below.
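For example, a hypothetical TitleFilter (not one from my actual config) that keeps only numbered episodes could look like this:

$podcast.TitleFilter="^\d{1,4}[: ]"   # titles must start with an episode number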

Another option would be to amend the split command as below:

$eps=($feed.rss.channel.item | ? {($_.Title -match $podcast.TitleFilter) -and ((($_.Title -split " |:")[0] -as [int]) -is [int])})[0]

There are only 2 characters different here – the “|:” in the pattern passed to the split operator. The reason this works is that -split, like -match, treats its pattern as a regular expression, so alternatives can be separated by a pipe. This line therefore splits on either a space or a colon in a single pass, and since we only care about the first fragment, it doesn’t matter which order the alternatives are specified in.
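A quick check of the behaviour (the title here is made up):

("001: Example Episode" -split " |:")[0]   # returns "001", which casts cleanly with -as [int]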

In the end I opted against using this approach because it solves a specific instance of the problem rather than the overall issue.