The 2013 Scripting Games, Beginner Event #3


Dr. Scripto has been fielding a lot of calls from the Help Desk lately. They’ve been asking him to look up information about the local hard drives in various servers—mainly size and free space information. He doesn’t mind helping, but all the requests have been getting in the way of his naps. He’s asked you to write a one-liner command that can get the information for him, and he wants the output in an HTML file. The HTML file should look something like this:



The Doctor says you don’t need to parameterize your command. It’s okay to write it to run against LocalHost, and he can just change that computer name as needed in the future. The resulting HTML does need to go into an HTML file on disk someplace, and he wants you to pay special attention to the following:

  • The browser displays “Disk Free Space Report” in the page tab when viewing the report.
  • “Local Fixed Disk Report” is in the Heading 2 (H2) HTML style.
  • The report ends with an HTML horizontal rule, and the date and time that the report was generated.
  • The size and free space values are shown as gigabytes (GB) and megabytes (MB) respectively, each to two decimal places.

The command you write can assume that both WMI and CIM are available on the remote computers, and that all the necessary firewall rules and authentication have already been taken care of.


Actual Entry (forgot to save the HTML)

Get-WmiObject -class Win32_Logicaldisk -computername "Localhost" -Filter "DriveType=3" | Select @{label="Drive";Expression={$_.DeviceID}},@{label="Size(GB)";Expression={"{0:N2}" -f($_.Size / 1GB)}},@{label="Size(MB)";Expression={"{0:N2}" -f($_.FreeSpace / 1MB)}} | ConvertTo-Html -head "<h2>Local Fixed Disk Report</h2>" -PostContent ("<hr>" + (get-Date))

Should have been (set the page title, used body to hold the heading, and output the file)

Get-WmiObject -class Win32_Logicaldisk -computername "Localhost" -Filter "DriveType=3" | Select @{label="Drive";Expression={$_.DeviceID}},@{label="Size(GB)";Expression={"{0:N2}" -f($_.Size / 1GB)}},@{label="FreeSpace(MB)";Expression={"{0:N2}" -f($_.FreeSpace / 1MB)}} | ConvertTo-Html -Title "Disk Free Space Report" -body "<h2>Local Fixed Disk Report</h2>" -PostContent ("<hr>" + (get-Date)) | Out-File -FilePath $PWD\DiskReport.html
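For reference, the same report can be produced with the CIM cmdlets on PowerShell 3.0 and later. This is a sketch, not part of the original entry:

```powershell
# CIM-based equivalent of the WMI one-liner (PowerShell 3.0+ assumed)
Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName 'Localhost' -Filter 'DriveType=3' |
    Select-Object @{Label='Drive';Expression={$_.DeviceID}},
                  @{Label='Size(GB)';Expression={'{0:N2}' -f ($_.Size / 1GB)}},
                  @{Label='FreeSpace(MB)';Expression={'{0:N2}' -f ($_.FreeSpace / 1MB)}} |
    ConvertTo-Html -Title 'Disk Free Space Report' -Body '<h2>Local Fixed Disk Report</h2>' -PostContent ('<hr>' + (Get-Date)) |
    Out-File -FilePath "$PWD\DiskReport.html"
```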


Learning Points

Hard Coded Paths (Boe Prox)

Hard-coded paths like C:\test\ won’t work for everyone. A better solution is to use one of the built-in PowerShell path variables, such as:

  • $PWD (current working directory)
  • $Env:TEMP (OS temp directory)
  • $Env:USERPROFILE (current user’s profile root)
  • among others
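A minimal sketch of using one of these variables instead of a hard-coded path:

```powershell
# Build the output path from a built-in variable rather than hard-coding C:\test\
$reportPath = Join-Path -Path $Env:TEMP -ChildPath 'DiskReport.html'
$reportPath   # e.g. ...\AppData\Local\Temp\DiskReport.html on the current machine
```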

ConvertTo-Html Tricks (Bartek Bielawski)

The -PreContent and -PostContent parameters can take string arrays, e.g.:

  • ConvertTo-Html (...) -PostContent '<hr>', (Get-Date)

You can also submit hashtables to the -Property parameter, bypassing the need for Select-Object, e.g.:

Get-Process | ConvertTo-Html -Property @{
    Label = 'Process Name'
    Expression = { $_.Name }
}, @{
    Label = 'Process Id'
    Expression = { $_.Id }
}, @{
    Label = 'Path to executable'
    Expression = { $_.Path }
} | Out-File Processes.html

CIM vs WMI (Jan Egil Ring and Ann Hershel)

  • We should be moving toward CIM because it supports both DCOM and WinRM as transports. They can’t both be used in the same session, though, and DCOM has to be specified explicitly, since CIM uses WinRM by default.
  • If any Windows 2000 systems were present they would only work with DCOM, and most modern systems that are set up securely will block DCOM access.
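A sketch of the WinRM-first, DCOM-fallback pattern with the CIM cmdlets (the server name here is hypothetical):

```powershell
# Try WinRM first; fall back to DCOM for hosts without WinRM (e.g. older OSes)
$computer = 'Server01'   # hypothetical
try {
    $session = New-CimSession -ComputerName $computer -ErrorAction Stop
} catch {
    $dcom = New-CimSessionOption -Protocol Dcom
    $session = New-CimSession -ComputerName $computer -SessionOption $dcom
}
Get-CimInstance -CimSession $session -ClassName Win32_LogicalDisk -Filter 'DriveType=3'
Remove-CimSession $session
```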

Advanced Notes (Bartek Bielawski)

  • I know this particular notion appeared with a different blogger, but it’s good enough to mention twice.
  • Use splatting if you plan to use the same info over and over again in a script; in fact, it’s not a bad idea to do it all the time.
    • Instead of 

Send-MailMessage -To -From -Subject Test -SMTPServer -Body “This is a test”

    • Do
$email = @{
    To = ''
    From = ''
    Subject = 'Test'
    SMTPServer = ''
    Body = 'This is a test'
}
Send-MailMessage @email
  • Use aliases for parameters that might have multiple input names. E.g. -ComputerName could also accept __Server or Name to compensate for different inputs on different PowerShell versions.
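A sketch of such an alias declaration (the function shell is illustrative):

```powershell
function Get-Something {
    param(
        # Accepts -ComputerName, but also binds pipeline objects whose
        # property is named __Server or Name
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [Alias('__Server','Name')]
        [string[]]$ComputerName
    )
    process { "Querying $ComputerName" }
}
```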

Advanced Notes (Ann Hershel)

  • You can create parameter sets such that a parameter that is normally optional becomes required when another parameter is specified.
  • This next portion appeared in a previous blog entry as well, but once again it’s good info and this entry had a more fleshed-out example.
  • When you declare a parameter as ValueFromPipeline you need to use a process block in order for the script/function to work with each pipelined in object.
function Get-DiskInfo {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [String[]]$ComputerName
    )
    process {
        "Processing $ComputerName"
    }
}
  • So this would work
'server1', 'server2' | Get-DiskInfo
  • But passing a string array to the parameter directly won’t, as the process block runs once and handles both names at the same time
Get-DiskInfo -ComputerName 'server1', 'server2'
  • In order to get this to work you need to have the process block unroll the content of $ComputerName and process each element separately.
function Get-DiskInfo {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [String[]]$ComputerName
    )
    process {
        $ComputerName | ForEach-Object { "Processing $_" }
    }
}
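For the parameter-set point above, here is a minimal sketch (all names hypothetical) where one parameter only becomes mandatory once another is used:

```powershell
function Invoke-Demo {
    [CmdletBinding(DefaultParameterSetName='NoLogging')]
    param(
        [Parameter(ParameterSetName='NoLogging')]
        [Parameter(ParameterSetName='Logging')]
        [string]$Name,

        # Choosing -EnableLogging selects the 'Logging' set...
        [Parameter(ParameterSetName='Logging', Mandatory=$true)]
        [switch]$EnableLogging,

        # ...which makes -LogPath mandatory as well
        [Parameter(ParameterSetName='Logging', Mandatory=$true)]
        [string]$LogPath
    )
    $PSCmdlet.ParameterSetName
}
```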
Posted in PowerShell, Scripting Games | Leave a comment

The 2013 Scripting Games, Beginner Event #2


Dr. Scripto finally has the budget to buy a few new virtualization host servers, but he needs to make some room in the data center to accommodate them. He thinks it makes sense to get rid of his lowest-powered old servers first… but he needs to figure out which ones those are. All of the virtualization hosts run Windows Server, but some of them don’t have Windows PowerShell installed, and they’re all running different OS versions. The oldest OS version is Windows 2000 Server (he knows, and he’s embarrassed, but he’s just been so darn busy). The good news is that they all belong to the same domain, and that you can rely on having a Domain Admin account to work with.

The good Doctor has asked you to write a PowerShell command or script that can show him each server’s name, installed version of Windows, amount of installed physical memory, and number of installed processors. For processors, he’ll be happy getting a count of cores, or sockets, or even both — whatever you can reliably provide across all these different versions of Windows. To help you out, he’s given you a text file, C:\IPList.txt, that contains one server IP address per line. If you can write this as a one-liner — awesome! If not, try to keep your answer as concise and compact as possible (although it’s perfectly okay to use full command and parameter names).

My Answer

Get-Content C:\IPList.txt | 
 ForEach-Object {
  Get-WmiObject -Namespace root\CIMv2 -Class Win32_ComputerSystem -ComputerName $_ | 
   Select-Object Name, 
                 @{Label="OS";Expression={(Get-WmiObject -Namespace root\CIMv2 -Class Win32_OperatingSystem -ComputerName $_.Name).Caption}}, 
                 TotalPhysicalMemory, 
                 NumberOfLogicalProcessors
  } | 
   Format-Table -AutoSize

To make it PowerShell v2 compatible, use the WMI system property __Server instead of Name.

My answer, but rounding up the size calculations

Using the format operator (-f)

Get-Content C:\IPList.txt | 
 ForEach-Object {Get-WmiObject -Namespace root\CIMv2 -Class Win32_ComputerSystem -ComputerName $_ | 
 Select-Object Name, 
 @{Label="OS";Expression={(Get-WmiObject -Namespace root\CIMv2 -Class Win32_OperatingSystem -ComputerName $_.Name).Caption}}, @{Label="Mem in GB";Expression={"{0:N0}" -f($_.TotalPhysicalMemory / 1GB)}}, 
 NumberOfLogicalProcessors} | 
Format-Table -AutoSize

Using the .NET Math class

Get-Content C:\IPList.txt | 
 ForEach-Object {Get-WmiObject -Namespace root\CIMv2 -Class Win32_ComputerSystem -ComputerName $_ | 
 Select-Object Name, 
 @{Label="OS";Expression={(Get-WmiObject -Namespace root\CIMv2 -Class Win32_OperatingSystem -ComputerName $_.Name).Caption}}, @{Label="MEM in GB";Expression={[System.Math]::Round(($_.TotalPhysicalMemory / 1GB),1)}}, 
 NumberOfLogicalProcessors} | 
 Format-Table -AutoSize

Learning Points

Splatting (Boe Prox)

Use splatting if you plan to use the same info over and over again in a script; in fact, it’s not a bad idea to do it all the time.

  • Instead of
Send-MailMessage -To -From -Subject Test -SMTPServer -Body "This is a test"
  • Do
$email = @{
 To = ''
 From = ''
 Subject = 'Test'
 SMTPServer = ''
 Body = 'This is a test'
}
Send-MailMessage @email

Stop using $ErrorActionPreference (Boe Prox)

Changing $ErrorActionPreference affects the entire script; instead, set the behavior as needed for each command using the -ErrorAction parameter.
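A sketch of scoping the behavior to a single command (the server name is hypothetical):

```powershell
# -ErrorAction Stop turns this one command's errors into terminating errors
# without touching $ErrorActionPreference for the rest of the script
try {
    Get-WmiObject -Class Win32_BIOS -ComputerName 'Server01' -ErrorAction Stop
} catch {
    Write-Warning "Could not query Server01: $_"
}
```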

Using PassThru to combine different objects (taygibb’s entry)

The PassThru parameter appears on many PowerShell cmdlets and returns an object when a cmdlet normally wouldn’t return one. For example, the Copy-Item cmdlet performs an action but returns no information about the copied item; when you specify the PassThru parameter, it returns the copied object so you can perform further work on it. In TayGibb’s script he uses this technique to add a new member to the management object returned from querying the first set of required WMI information. Normally Add-Member wouldn’t return anything to the console, but with the PassThru parameter it returns the newly changed object.

Get-WmiObject -Class "Win32_ComputerSystem" -ComputerName localhost | 
 Select-Object -Property Name,
                         NumberOfProcessors | 
                            Add-Member -Name "OS Version" -Value $(Get-WmiObject -Class Win32_OperatingSystem -ComputerName localhost | 
                                                                     Select-Object -ExpandProperty Caption) -MemberType NoteProperty -PassThru

Advanced Notes (Art Beane)

  • If your script expects an array for a parameter, make sure to declare it as one, e.g.
    • [string[]]$ComputerName
  • Match your parameter names to the properties of the expected input (e.g. ComputerName, not Server)
  • When working in v3 with CIM, why not Try{WSMAN} Catch{DCOM}?

Misc Advanced Notes

  • Use #Requires -Version X to declare the PowerShell version your script needs
  • Be careful with shorthand parameter declarations, for example
    • This works in v2 and v3
      • [Parameter(Position=0,ValueFromPipeline=$True,Mandatory=$True)]
    • This only works in v3
      • [Parameter(Position=0,ValueFromPipeline,Mandatory)]



Posted in Uncategorized | Leave a comment

The 2013 Scripting Games, Beginner Event #1 follow-up

It’s been a while since I blogged, and I have the skeletons of multiple blog posts waiting to be edited. But now that I’ve settled into my new job and the Scripting Games have died down, I can finally start posting my notes and learning points from the rest of the Scripting Games events. So here is the first in a series of follow-ups on the Scripting Games. Each entry includes the following:

  • The original question
  • My submitted answer
  • A revised answer after reviewing other entries and judges notes
  • A summary of learning points I took away from the event

Question 1

Dr. Scripto is in a tizzy! It seems that someone has allowed a series of application log files to pile up for around two years, and they’re starting to put the pinch on free disk space on a server. Your job is to help get the old files off to a new location.

The log files are located in C:\Application\Log. There are three applications that write logs here, and each uses its own subfolder. For example, C:\Application\Log\App1, C:\Application\Log\OtherApp, and C:\Application\Log\ThisAppAlso. Within those subfolders, the filenames are random GUIDs with a .LOG filename extension. Once created on disk, the files are never touched again by the applications.

Your goal is to grab all of the files older than 90 days and move them to \\NASServer\Archives. You need to maintain the subfolder structure, so that files from C:\Application\Log\App1 get moved to \\NASServer\Archives\App1, and so forth.

You want to ensure that any errors that happen during the move are clearly displayed to whoever is running your command. You also want your command to be as concise as possible — Dr. Scripto says a one-liner would be awesome, if you can pull it off, but it’s not mandatory that your command be that concise. It’s also okay to use full command and parameter names. If no errors occur, your command doesn’t need to display any output — “no news is good news.”

My Answer

Get-ChildItem -Path "C:\Application\Log" -Recurse -Filter *.log | Where-object {$_.CreationTime -le (get-date).AddDays(-90)} | Select Name,Directory,FullName  | ForEach-Object {Move-Item $_.FullName -Destination ("\\NASServer\Archives\"+($_.Directory.Name)+"\"+$_.Name)}

After reviewing other entries my revised entry is

Get-ChildItem -Path "C:\Application\Log" -Recurse -Filter "*.log" | Where-object {$_.CreationTime -le (get-date).AddDays(-90)} | ForEach-Object {Move-Item $_.FullName -Destination ("\\NASServer\Archives\"+$_.Directory.Name)}


  • Redundant Select statement; for some reason when I first wrote the script I thought this was the only way to get the directory name info
  • Didn’t need the extra $_.Name portion, since I’m copying to a directory, not to a full file path
  • I didn’t enclose *.log in quotation marks, so it will find *.log123 etc. in addition to *.log

Learning Points

Think before using -recurse (Ann Hershel)

If you are working with a deep folder structure, you can use wildcards to target just the levels you need, e.g.

Get-ChildItem -Path C:\Application\Log\*\*.log

Foreach() versus ForEach-Object (Ann Hershel)

foreach() is faster, but it requires the data to be collected in memory first, so large sets can chew up a lot of memory, and if gathering the collection fails then the whole command fails. ForEach-Object processes objects as they appear, so it uses less memory and is potentially better for large data sets.
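The difference can be sketched like this:

```powershell
# foreach(): the whole result set is collected into memory first
$files = Get-ChildItem C:\Application\Log -Recurse -Filter "*.log"
foreach ($file in $files) { $file.Name }

# ForEach-Object: each file is handled as it streams down the pipeline
Get-ChildItem C:\Application\Log -Recurse -Filter "*.log" |
    ForEach-Object { $_.Name }
```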

Create directories while moving them

You can check for the destination folders and create them with New-Item if they aren’t present. New-Item alone only creates the destination directories; you still need Move-Item to actually move the files. A simple if statement handles it:

if (-not (Test-Path $ArchiveDirectory)) {
    New-Item $ArchiveDirectory -ItemType Directory | Out-Null
}
Move-Item $file.FullName $ArchiveDirectory

Honor Verb Hyphen Noun when creating functions (Bartek Bielawski)

Use approved verbs; you should rarely deviate from this. And your nouns should be singular, not plural.
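Get-Verb lists the approved verbs if you’re unsure:

```powershell
# Returns a match only if the verb is on the approved list
Get-Verb | Where-Object { $_.Verb -eq 'Get' }
```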

Don’t use Boolean if you don’t need to (Bartek Bielawski)

You can test for a value by using if ($Value) and the opposite by if (-not $Value)
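In other words:

```powershell
# $null, 0, '' and empty arrays all evaluate to $false in a boolean context
$Value = $null
if ($Value) { 'has a value' }
if (-not $Value) { 'empty or not set' }
```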

Don’t repeat calculations in your pipeline (June Blender)

For example the following will do the math for Get-Date for every file passed down the pipeline

Get-ChildItem C:\Application\Log\*\*.log | 
 Where-Object {$_.LastWriteTime -lt (Get-Date).AddDays(-90)} | 
 Move-Item -Destination ...

It would be better to do it once, save it in a variable, and re-use it

$ArchiveDate = (Get-Date).AddDays(-90)
Get-ChildItem C:\Application\Log\*\*.log | 
 Where-Object {$_.LastWriteTime -lt $ArchiveDate} | 
 Move-Item -Destination ...

Another issue with doing the calculation every time an object is passed down the pipeline is that the date is changing each time. What if you ran this command at 11:59PM? The files that hit the pipeline at 12:00AM are being compared using a different date.

Validate your parameters before running the script (Glenn Sizemore)


Param ( 
    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [ValidateScript({Test-Path $_ -PathType Container})] 
    [string]$Path,        # the parameter names were lost from the original post; $Path and $Destination are assumed here

    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [ValidateScript({Test-Path $_ -PathType Container})] 
    [string]$Destination
)

Don’t make your parameter names all start with the same letter (Glenn Sizemore)

It makes tab completion harder than it needs to be!

How to correctly use Pipeline input for your functions (Boe Prox)

If you have a parameter that accepts pipeline input then you need to have the work performed on the pipeline input in a process block, otherwise it will only execute on the last object piped in. For example, take this function:

Function Test-Something { 
    Param ( 
        [Parameter(ValueFromPipeline=$true)]
        [int[]]$Computername    # the body was cut off in the original post and is reconstructed here
    )
    ForEach ($Computer in $Computername) { 
        $Computer
    }
}

If the you try the following:

1,2,3,4,5 | Test-Something

You will only get the last item back (in this case, 5).


If you add a process block, like so:

Function Test-Something { 
    Param ( 
        [Parameter(ValueFromPipeline=$true)]
        [int[]]$Computername
    )
    Process {
        ForEach ($Computer in $Computername) { 
            $Computer
        }
    }
}
And rerun the command you will get each item output instead of the last item in the pipeline.

So overall: use the Begin{} block to initialize anything that needs to run only once before the Process{} block (e.g. opening a SQL connection). The Process{} block then executes for each piped-in object. Finally, the End{} block executes any one-time work that needs to happen after the piped-in objects have been processed (e.g. closing the SQL connection).
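A skeleton of the three blocks (the SQL connection is only indicated in comments):

```powershell
function Invoke-PipelineWork {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [string[]]$InputObject
    )
    Begin   { Write-Verbose 'Runs once, first: e.g. open a SQL connection' }
    Process { foreach ($item in $InputObject) { "Working on $item" } }
    End     { Write-Verbose 'Runs once, last: e.g. close the SQL connection' }
}

'server1', 'server2' | Invoke-PipelineWork
```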

Posted in PowerShell, Scripting Games, Uncategorized | Leave a comment

The 2013 Scripting Games, Beginner Event #1

Having participated in the 2013 Winter Scripting Camp, I was excited to try out my scripting skills again in the 2013 Scripting Games. While my scripting skills have advanced a lot over the past 6 months since I co-founded the Philadelphia PowerShell User Group (PhillyPoSH), I still feel that I’m not cut out for the advanced events just yet, especially considering the Beginner events of Winter Scripting Camp took me much longer than I expected. For those 2 events I struggled to get the answer down to one or two lines. While I believed that my scripting skills had evolved, the beginner events left me feeling as if I hadn’t advanced that much. For a quick history of the games and my submissions for the Winter Scripting Camp, check out the presentation I gave at PhillyPoSH in March of this year.

So the 1st event for both the advanced and beginner tracks of the Scripting Games is over, and the voting phase for the 1st event is under way. As a recap, the question for the beginner track was:

The log files are located in C:\Application\Log. There are three applications that write logs here, and each uses its own subfolder. For example, C:\Application\Log\App1, C:\Application\Log\OtherApp, and C:\Application\Log\ThisAppAlso. Within those subfolders, the filenames are random GUIDs with a .log filename extension. After they are created on disk, the files are never touched again by the applications.

Your goal is to grab all of the files older than 90 days and move them to \\NASServer\Archives. You need to maintain the subfolder structure, so that files from C:\Application\Log\App1 get moved to \\NASServer\Archives\App1, and so forth.

You want to ensure that any errors that happen during the move are clearly displayed to whoever is running your command. You also want your command to be as concise as possible — Dr. Scripto says a one-liner would be awesome, if you can pull it off, but it’s not mandatory that your command be that concise. It’s also okay to use full command and parameter names. If no errors occur, your command doesn’t need to display any output — “no news is good news.”

After trying a few attempts I originally came up with

Get-ChildItem -Path "C:\Application\Log" -Recurse -Filter *.log | Where-object {$_.CreationTime -le (get-date).AddDays(-90)} | Select Name,Directory,FullName  | ForEach-Object {Move-Item $_.FullName -Destination ("\\NASServer\Archives\"+($_.Directory.Name)+"\"+$_.Name)}

At the time, I kept having issues getting $_.Directory.Name to work. When I referenced it I would get an error that the property Directory didn’t exist. Using Select-Object fixed it for me, but upon further review I noticed that it worked without Select-Object. So I guess I probably had a misspelling somewhere when I originally wrote out the command.

I also thought that I needed to include the full file name in the destination path, which obviously isn’t the case. My command would essentially say: copy C:\Application\Logs\SomeOtherApp\Test.log to \\NASserver\Archive\SomeOtherApp\Test.log, when it should have dropped the file name from the destination and used \\NASserver\Archive\SomeOtherApp\ instead.

I also didn’t realize my query for *.log would return files with extensions like .log123. Enclosing the filter in double quotation marks ensures that I only look for *.log files.

So after reviewing and voting on some entries, I realized I could have shortened (and corrected) my command to

Get-ChildItem -Path "C:\Application\Log" -Recurse -Filter "*.log" | Where-object {$_.CreationTime -le (get-date).AddDays(-90)} | ForEach-Object {Move-Item $_.FullName -Destination ("\\NASServer\Archives\"+$_.Directory.Name)}

One of the cool parts about this year’s Scripting Games is that everyone can vote and comment on everyone else’s scripts. So far I’ve seen the following interesting things:

  • Using a wildcard in the path name instead of using the -Filter parameter
    • Get-ChildItem -Path C:\Application\Logs\*\*.Log -Recurse
  • Even though the question didn’t ask for it, some contestants checked for the destination folders and created them (using New-Item) if they weren’t present, for example in a foreach loop. New-Item alone only creates the destination directories; you still need Move-Item to move the files.
    • if (-not (Test-Path $ArchiveDirectory)) {New-Item $ArchiveDirectory -ItemType Directory | Out-Null} Move-Item $file.FullName $ArchiveDirectory

Blogger Bartek Bielawski posted his answers to both the beginner and the advanced tracks, and I found his use of a script block in the beginner event very interesting.

Get-ChildItem -Path C:\Application\Log\*\*.Log | Where-Object {$_.LastWriteTime -lt (Get-Date).AddDays(-90)} | Move-Item -Destination {"\\NASServer\Archives\{0}\" -f $_.Directory.Name}

I still plan to vote on a few of the beginner entries this week until the next event opens. As I come across more interesting answers I’ll update this post.

Posted in PowerShell, Scripting Games | Tagged as: | Leave a comment

PowerShell script to find all Active Sync devices on an Exchange 2010 server that haven’t synced in a specified amount of time

I published my first PowerShell script to TechNet’s Script repository. You can grab it here; below are the details of the script, straight from the help file:

Get-InactiveActiveSyncDevices pulls all user mailboxes with an ActiveSync partnership and then selects the ActiveSync devices that haven’t synced in the specified period. These devices are sorted in ascending order and placed in an HTML table, which is emailed to the specified user using a specified reply-to address and SMTP server.

The script first checks whether implicit remoting is needed to load the required PSSnapin (Microsoft.Exchange.Management.PowerShell.E2010); this is done by checking whether the $ExchangeConnectionUri variable has a non-$null value. If it contains a URI, the script creates a new PSSession using the current credentials and imports the session. If implicit remoting isn’t needed, it verifies that the required PSSnapin is loaded and, if not, tries to load it locally.

Get-InactiveActiveSyncDevices then uses Get-CASMailbox to pull all the user mailboxes (not Discovery or CAS mailboxes) with ActiveSync device partnerships and saves them in a variable called $ActiveSyncMailboxes. It walks through each mailbox and uses Get-ActiveSyncDeviceStatistics to pull DeviceType, DeviceUserAgent, DeviceID, LastSyncAttemptTime, and LastSuccessSync for each mailbox’s ActiveSync device partnerships, and puts these properties, along with the full name associated with the mailbox, into a hashtable called $UserActiveSyncStats. Get-ActiveSyncDeviceStatistics isn’t used exclusively because it has no property that stores just the name of the user who owns the ActiveSync device, only a full Active Directory path to the device. The hashtable is used to create a custom PowerShell object, which is then added to $ActiveSyncDeviceList.

A variable called $MatchingActiveSyncDevices holds all the devices in $ActiveSyncDeviceList that haven’t synced to the Exchange server within the number of hours specified in $HourInterval. $MatchingActiveSyncDevices is then checked to see whether it’s an empty array. If it contains items, an HTML header is created to format the table for the HTML email report and saved in a variable called $HTMLHeader. The body of the email then contains all of the matching ActiveSync devices from $MatchingActiveSyncDevices in ascending order, converted to HTML using the header information in $HTMLHeader; otherwise the body contains a message stating that no devices matched the given criteria. An email is then sent to the user specified in $to, from the address specified in $from, using the email server specified in $SmtpServer.


Posted in Exchange 2010, PowerShell | Leave a comment

Multiple Exchange accounts in Outlook 2010 and 2013

We have added a lot of new email domains at my company recently, and one of the most common requests from our users is to work with multiple Exchange mailboxes within Outlook 2010. This would normally be accomplished by giving the user full access to whatever additional mailbox they need. But this method has some drawbacks for our users, mainly:

  • The user has to manually select the outgoing email address of another mailbox they have full access to if they want to send as that account.
    • The preferred behavior is that when they select the mailbox in the Outlook tree and create a new email, it defaults to the outgoing address for that mailbox. This is not the behavior when granted full access to a mailbox; instead, the outgoing email always defaults to the primary Exchange account.
  • If the user requires separate signatures for each additional mailbox, they have to specify them manually, since the signature used is the one for the main Exchange account. While you can create multiple signatures, you can only attach one signature each for new emails and replies to the main Exchange account, not to the additional mailboxes the user was granted access to.

These might not seem like huge issues, but when the additional mailboxes are treated as if they came from separate companies, it turns into a huge issue when a user accidentally sends an email intended for a client of Company A with the outgoing address and signature of Company B. In searching for a solution I came across this Microsoft article detailing how to add multiple Exchange accounts to an Outlook profile. The problem is that it requires volume licenses of Office 2010 and a fairly complicated process to get the additional accounts added. All the Office licenses at my company are Medialess License Kits (MLK), so that ruled out this method. Originally the ability to add multiple accounts existed in the preview of Outlook 2010, but the method described in the article no longer worked, and since there was an article detailing the process for the retail release I assumed this feature had been dropped. When the Office 2013 preview came out, one of the first things I did was test its ability to add multiple accounts, and surprisingly it worked! All I needed to do was:

  1. After adding the first account in Outlook 2013, go to the File tab.
  2. Under Info, click the +Add Account button.
  3. From there, fill in the info for the second Exchange account and it will be added to your account tree.

This method allowed me to address the issues my users were having with managing additional mailboxes, and it didn’t require granting the user full access to the mailbox. Wondering if this worked in Outlook 2010 as well, I tried it and was delighted that it worked, despite Microsoft’s documentation detailing a much more involved process. I have since tested up to three different Exchange accounts on the same Exchange server with no issues. I’m not sure what the limit is, but I assume it might be three accounts per Outlook profile, as originally specified in the preview build of Outlook 2010.


Posted in Exchange 2010, Outlook | Leave a comment

Make sure your Exchange 2003 server is fully removed before raising your AD Domain Functional Level

Though we completed our Exchange 2010 migration over a year ago, we kept our Exchange 2003 server around due to some hard-coded references in internally developed apps. Once those were taken care of, we shut down the Exchange 2003 server for a few months to make sure nothing else was still referencing it. After six months we decided it was safe to remove it from the domain, and we used the following TechNet article as a guide. Now that Exchange 2003 was gone, and after upgrading some other services that were dependent on a 2003 Native domain functional level, we decided to raise the level to 2008 R2. But immediately after doing so, our Exchange 2010 server started locking up every 24-36 hours. We tried restarting the Exchange services during the lockup, but only a reboot would fix the issue.

When reviewing the event log on the Exchange 2010 server, it appeared that it could not reach either of our domain controllers (DC01 and DC02), both of which held the Global Catalog role. After multiple attempts to reach a DC or GC, the Exchange services would shut down, taking all client protocols down with them. The most prevalent event log entry was as follows:

Log Name:      Application
Source:        MSExchange Autodiscover
Date:          8/2/2012 6:27:01 PM
Event ID:      1
Task Category: Web
Level:         Error
Keywords:      Classic
User:          N/A
Unhandled Exception "Could not find any available Global Catalog in forest"

Other events that appeared related to the problem are as follows:

Log Name:      System
Source:        Microsoft-Windows-WAS
Date:          8/2/2012 6:18:26 PM
Event ID:      5011
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
A process serving application pool 'MSExchangeSyncAppPool' suffered a fatal communication error with the Windows Process Activation Service. The process id was '4976'. The data field contains the error number.

Log Name:      Application
Source:        MSExchange ADAccess
Date:          8/2/2012 6:29:03 PM
Event ID:      2103
Task Category: Topology
Level:         Error
Keywords:      Classic
User:          N/A
Process MAD.EXE (PID=5820). All Global Catalog Servers in forest DC=company,DC=net are not responding:

Log Name:      Application
Source:        MSExchangeTransport
Date:          8/2/2012 1:13:32 PM
Event ID:      5020
Task Category: Routing
Level:         Warning
Keywords:      Classic
User:          N/A
The topology doesn’t contain a route to Exchange 2000 server or Exchange 2003 in Routing Group CN=First Routing Group,CN=Routing Groups,CN=First Administrative Group,CN=Administrative Groups,CN=Email,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=company,DC=net in routing tables with the timestamp 8/2/2012 5:13:32 PM

Log Name:      Application
Source:        MSExchangeTransport
Date:          8/2/2012 1:13:32 PM
Event ID:      5006
Task Category: Routing
Level:         Warning
Keywords:      Classic
User:          N/A
Cannot find route to Mailbox Server CN=EX2003,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=Email,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=company,DC=net CN=Test,CN=First Storage Group (EX2003),CN=InformationStore,CN=EX2003,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=Email,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=company,DC=net in routing tables with timestamp 8/2/2012 5:13:32 PM. Recipients will not be routed to this store

Log Name:      Application
Source:        MSExchangeApplicationLog
Date:          8/1/2012 1:10:42 PM
Event ID:      9106
Task Category: ServicePicker
Level:         Error
Keywords:      Classic
User:          N/A
Service MSExchangeMailSubmission. Exchange topology discovery encountered an exception. Microsoft.Exchange.Data.Directory.ADTransientException: Could not find any available Domain Controller

Log Name:      Application
Source:        MSExchange ADAccess
Date:          8/1/2012 12:59:24 PM
Event ID:      2501
Task Category: General
Level:         Error
Keywords:      Classic
User:          N/A
Process MSEXCHANGEADTOPOLOGY (PID=1520). The site monitor API was unable to verify the site name for this Exchange computer – Call=HrSearch Error code=80040934. Make sure that Exchange server is correctly registered on the DNS server

At first glance the issues appeared AD related, but there were no obvious problems with the domain controllers or any other services relying on AD. The only recent major AD change was raising the domain functional level. Knowing that Exchange 2003 and earlier cannot function when the domain functional level is higher than Windows 2003 Native, the entries referencing our decommissioned Exchange 2003 server (EX2003) seemed to point at the root cause. Searching the web for answers, we stumbled across a TechNet article that led us to the references to EX2003 in Active Directory Sites and Services under Services -> Email -> Administrative Groups -> First Administrative Group, which needed to be deleted:

  • \Email\Administrative Groups\First Administrative Group\Servers\
  • \Email\Administrative Groups\First Administrative Group\Routing Groups\First Routing Group\
  • \Email\Administrative Groups\First Administrative Group\Folder Hierarchies\Public Folders\

The actual removal was done in ADSI Edit, where those locations correspond to the following paths:

  • CN=Configuration,DC=company,DC=NET\CN=Services\CN=Microsoft Exchange\CN=Email\CN=Administrative Groups\CN=First Administrative Group\CN=Servers\
  • CN=Configuration,DC=company,DC=NET\CN=Services\CN=Microsoft Exchange\CN=Email\CN=Administrative Groups\CN=First Administrative Group\CN=Routing Groups\CN=First Routing Group\
  • CN=Configuration,DC=company,DC=NET\CN=Services\CN=Microsoft Exchange\CN=Email\CN=Administrative Groups\CN=First Administrative Group\CN=Folder Hierarchies\CN=Public Folders\
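Before deleting anything in ADSI Edit, it can be worth enumerating what still lives under those containers. A minimal sketch, assuming the ActiveDirectory RSAT module is available; "Email" is this environment's Exchange organization name and company.net is this post's placeholder domain:

```powershell
# Sketch: list lingering objects under the old First Administrative Group
# before removing them in ADSI Edit. Assumes the ActiveDirectory module.
Import-Module ActiveDirectory
$configNC   = (Get-ADRootDSE).configurationNamingContext
$adminGroup = "CN=First Administrative Group,CN=Administrative Groups," +
              "CN=Email,CN=Microsoft Exchange,CN=Services,$configNC"
Get-ADObject -SearchBase $adminGroup -SearchScope Subtree -Filter * |
    Select-Object Name, ObjectClass, DistinguishedName
```

Reviewing this output first makes it easier to confirm that everything under the container really belongs to the decommissioned EX2003 server before anything is deleted.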

Once these references were removed, the issue ceased!

When I discussed the issue at the Philly Exchange User Group, resident Exchange Master Bhargav Shukla mentioned that if I had used the Exchange 2010 Deployment Assistant, it would have had a section detailing how to properly remove Exchange 2003 from my environment.


Posted in Active Directory, Exchange 2003, Exchange 2010, W2K8R2

How to change and then resume a failed New-MailboxImportRequest in Exchange 2010

Recently I had to import a handful of PST files into our Exchange 2010 server using the New-MailboxImportRequest cmdlet, and did so without setting the -BadItemLimit parameter. During the import one of the files threw an error. Despite the discoverability of PowerShell, I couldn’t quickly figure out how to restart the import with a newly specified bad item limit. Obviously I didn’t want to restart the import from the beginning with a bad item limit specified, since that would create duplicate items in the destination mailbox. And the Resume-MailboxImportRequest cmdlet does not allow you to change the settings of a failed import. After some searching I came across a TechNet article showing how to do it, but since the information was hard to find via a Google/Bing search I figured I’d summarize it.

Once you have a failed mailbox import (the same holds true for exports), you can change the original request by piping it from Get-MailboxImportRequest. For example, to set a bad item limit of 50 on all failed requests:

Get-MailboxImportRequest -Status Failed | Set-MailboxImportRequest -BadItemLimit 50

To target a specific request you can refer to it by name. By default, import requests are named <alias>\MailboxImportX (where X = 0–9). You could have specified a name for the import beforehand and used that to reference the failed import in question, but I didn’t, and this was the only failed mailbox. If I had needed to, I could have used:

Get-MailboxImportRequest -Identity USERALIAS\MailboxImportX | Set-MailboxImportRequest -BadItemLimit 50

If you can’t find the identity, you can always pipe all the failed requests into a formatted table showing the Name and Identity properties to make it easier to find the failed import in question:

Get-MailboxImportRequest -Status Failed | FT name,identity

Paul Cunningham has an interesting technique for getting the request identity over at his excellent Exchange blog. Basically you pull the user with the Get-User cmdlet, save it in a variable, and then pass its distinguished name to Get-MailboxExportRequest’s -Identity parameter. For example:

$User = Get-User Jdoe
Get-MailboxExportRequest -Identity $User.DistinguishedName

So back to rectifying the failed import: now that you have changed the request and set a larger bad item limit, you can resume all failed requests with:

Get-MailboxImportRequest -Status Failed | Resume-MailboxImportRequest

Or a particular failed mailbox import with:

Get-MailboxImportRequest -Identity USERALIAS\MailboxImportX | Resume-MailboxImportRequest

So what if you want to know what issue caused the failure? You can find out by using Get-MailboxImportRequestStatistics with the -IncludeReport parameter. You’ll want to output the report to a text file, since it contains a lot of information that is much easier to search in a text file than on the console screen. Building off my previous example, the command would be:

Get-MailboxImportRequest -Identity USERALIAS\MailboxImportX | Get-MailboxImportRequestStatistics -IncludeReport | FL > C:\FILEPATH\report.txt

You can review the exported text file for the exact email(s) that caused the import to fail.
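Rather than reading the report end to end, you can pull out just the failure-related lines. A minimal sketch; the search patterns are rough guesses at the report's wording (not confirmed by this post), and C:\FILEPATH is the placeholder path from the example above:

```powershell
# Sketch: surface only the lines of the saved report that mention
# failures or bad items. Adjust the patterns to match the actual report.
Select-String -Path C:\FILEPATH\report.txt -Pattern 'fail', 'bad item' |
    Select-Object LineNumber, Line
```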

Posted in Exchange, Exchange 2003, Exchange 2010, PowerShell

Exchange 2010 mailboxes inherit IMAP folder views and attributes if imported from a PST dump of an IMAP account

I recently had to migrate 9 IMAP accounts from GoDaddy to our Exchange 2010 server. Since GoDaddy does not offer export services and it was only 9 accounts we decided to use a handful of Outlook profiles to connect to the GoDaddy IMAP accounts and pull down a full copy of the mailboxes. The exact process we used is as follows:

  1. Use Outlook to connect to the IMAP accounts in question and pull down a full copy of each mailbox. You may run into issues accessing multiple large IMAP accounts from one Outlook profile; in my experience 5-10 accounts per profile should be OK.
    1. By default Outlook will only pull down the headers of IMAP messages, so you will need to instruct Outlook to do a full sync
    2. This is done under the Send/Receive ribbon tab -> Send & Receive section -> Send/Receive Groups -> Define Send/Receive Groups…
    3. In the Send/Receive Groups window, highlight All Accounts and <click> the Edit… button
    4. Under the Accounts section make sure to highlight the IMAP account in question, and in the section labeled Account Options <select> one folder’s check box. Then <select> the radio button for Download complete item including attachments. Finally <left click> on the same folder and you should see an option to apply the same setting to all folders within this IMAP account. Repeat for each IMAP account
    5. Now perform a send/receive and wait for all the messages to come down
  2. Since Outlook creates a PST file for each synced IMAP account (C:\Users\ACCOUNT\AppData\Local\Microsoft\Outlook), they can be used to directly import the mailbox into an Exchange account
  3. On my Exchange server I created 3 empty accounts (the remaining 6 became distribution groups) and used the following PowerShell command to import each PST into the corresponding empty Exchange account
    1. New-MailboxImportRequest –Mailbox USERNAME –FilePath \\NETWORK\PATH\OF\IMAP.PST
    2. You can also create a script to pull them all in at once. Here’s an example of one way to do it if the PST file names match the user names:
      1. Dir \\NETWORK\PATH\OF\*.PST | %{New-MailboxImportRequest –Name ImportOfPst –BatchName ImportPstFiles –Mailbox $_.BaseName –FilePath $_.FullName}
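If I were redoing the batch import in step 3, I'd fold in the lesson from the earlier post and set a bad item limit and a unique request name up front. A hedged variant of the one-liner above (the share path is this post's placeholder, and the limit of 10 is an arbitrary choice, not something the post prescribes):

```powershell
# Sketch: one import request per PST file, uniquely named, with a bad
# item limit so a single corrupt message doesn't fail the whole request.
Get-ChildItem \\NETWORK\PATH\OF\*.PST | ForEach-Object {
    New-MailboxImportRequest -Name "Import_$($_.BaseName)" `
        -BatchName ImportPstFiles -Mailbox $_.BaseName `
        -FilePath $_.FullName -BadItemLimit 10
}
```

Naming each request after its PST also makes it easy to find a specific request later with Get-MailboxImportRequest -Identity.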

In our case these imported mailboxes belonged to a small consulting company that was purchased by my company. The users receiving these mailboxes already had accounts on our Exchange 2010 server, and the imported accounts would be used to continue any business correspondence still associated with the old company.

So we gave the users Send As and Full Access permissions to their imported accounts and let auto-mapping do its magic. The issue we ran into was that while they could see the imported mailbox, new emails did not show up even though the unread count was increasing. When we checked via EWS, ActiveSync, and Outlook Web Access the new items were visible.

After some poking around we noticed that while the folders and mail items were imported into an Exchange mailbox, they retained the folder views associated with an IMAP account. The default view of an IMAP account is Hide Messages Marked for Deletion, but what it actually does is filter for messages with the IMAP status of Unmarked.

The idea is to hide any IMAP messages marked for deletion that haven’t yet synced with the IMAP server. Since Exchange messages do not have this field, they do not show up with this filter applied. If the view is changed to IMAP Messages, which applies no filter, then all the messages show up. You can even apply this view to all the other mail folders. But a more elegant solution is to remove the IMAP views altogether, especially if you have hundreds of mailboxes with the same issue.

There are two ways I found to do this: one is a manual process that can only correct one folder at a time; the other is intended to correct mailboxes by the batch, but can also be applied to a single mailbox.

First method (Manually change each folder in a mailbox)

Using MFCMAPI you can change each email folder’s attribute from the IMAP designation (IPF.Imap) to the Exchange designation (IPF.Note). MFCMAPI requires you to have access to the mailbox in question and a mail profile set up to access it (you can create one on the first run of the program). So you can either run the application from the user’s profile or from a profile with access to the account:

  1. Start the program and log in to your mail profile by going to Session -> Logon and selecting the profile that has access to the mailbox you need to edit. Once connected, highlight that mailbox, <left click> and select Open Store from the drop down menu
  2. Once in the store, navigate to the Top of Information Store. Depending on whether this mailbox is the default for the profile or added to it (via the Full Access permission), the folder tree is slightly different:
    1. Default for profile: IPM_SUBTREE
    2. Secondary: Root Container -> Top of Information Store
  3. Now highlight the mail folder in question, look for the property named PR_CONTAINER_CLASS, PR_CONTAINER_CLASS_A, ptagContainerClass, then <right click> and select Edit Property… from the drop down menu
  4. From here you can edit the ANSI entry, which you’ll want to change from IPF.Imap to IPF.Note. Then <click> the OK button
  5. Repeat for all the mail folders in the container and exit the program when you are done.

Second method (“Find and Replace” batch method)

You’ll need a program from Microsoft called ExFolders, which you install and run from the Exchange server (see the readme instructions included in the download). The instructions on using this program have been re-purposed from the following TechNet post answer provided by Kevinrk:

  1. Run ExFolders directly from the Exchange Bin folder
  2. Go to File -> Connect and fill in the following fields:
    1. Type: Mailboxes
    2. Connect by: Database
    3. Global Catalog Server: Select your GC
    4. Databases: Select the mailbox database you want to work on
  3. Now select either the entire mailbox database or an individual mailbox you want to correct and select Tools -> Custom Bulk Operation
  4. In the window labeled Custom Bulk Operation, look for the section labeled Overall Filter and enter the following string to make sure that only folders with the IMAP container class are changed:
    • (&(0x3613001E=IPF.Imap))
  5. Under the section labeled Operations, <click> the Add button and then select Other folder properties in the Operation Type window
  6. In the Folder properties operation window set the following options:
    1. Action: Modify
    2. Property: PR_CONTAINER_CLASS : 0x3613001E
    3. Value: IPF.Note
  7. Once those options are set, <click> the Add button and then the OK button
  8. When you’re back at the Custom Bulk Operation window, <click> OK to run the bulk operation. From here ExFolders will walk through the mailbox container(s) and change each instance of PR_CONTAINER_CLASS from IPF.Imap to IPF.Note

What’s next?

For either method you should have the user restart Outlook to make sure the changes take effect. In some cases the IMAP views persisted and I had to reset the user’s views in Outlook (outlook.exe /cleanviews).

Posted in Email, Exchange 2010, IMAP, Outlook

How to quickly disable account access in AD and Exchange 2010

While testing the feasibility of a Bring Your Own Device policy with Exchange 2010 ActiveSync, we noticed some odd behavior with disabled accounts.

One of the policies we decided on was that during an employee termination we would disable sending and receiving on an ActiveSync device before we removed ActiveSync or wiped the device. The idea was that this would give a terminated employee time to make any personal phone calls before handing their personal device over to IT so we could remove the ActiveSync account. If they refused to hand it over, we would wipe the device instead.

In testing we originally thought it would be enough to disable the AD account and reset its password to force propagation of the change throughout the forest. To our surprise, though the disabled account could no longer access network resources, it could still send and receive emails via ActiveSync. Furthermore, the account could also log into Outlook Web Access with both the old and the new password. This behavior could sometimes last for hours!

After some research and a little help from the TechNet community, I found that the behavior stems from cached access tokens in IIS. Both OWA and ActiveSync (and EWS) use IIS, which caches access tokens for up to 15 minutes. In my environment (and a few others) the cached tokens lasted for a few hours, so I’m not sure what other factors are at play in keeping them alive longer than the 15 minute interval. One way to reset the tokens is to restart IIS, but that is a little extreme, as it flushes all access tokens and active connections.

One of the various methods mentioned in the TechNet forums was setting the recipient limit to 0:

Set-Mailbox -Identity "John Smith" -RecipientLimits 0

Obviously this still allows the user to access OWA, ActiveSync, and address books, but it stops them from sending any nasty emails through their disabled account after the fact. I also tried setting the storage quota for sending messages to 0, but that didn’t seem to apply in a timely fashion (15 mins). Setting the recipient limit was almost instantaneous and works even during an active OWA session.

I then tried to see if I could force an IIS token refresh by changing the password of a disabled account and then logging in with the new password. This had the strange side effect of caching 2 IIS tokens: one that worked with the old password and one that worked with the new one!

Overall, the best method was to disable OWA and ActiveSync on the user account:

Set-CASMailbox -Identity "John Smith" -OWAEnabled:$False
Set-CASMailbox -Identity "John Smith" -ActiveSyncEnabled:$False

This worked within 5 minutes and successfully locked out the account from both services.
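The steps from this post can be combined into a single hedged sketch of a quick-disable routine. "John Smith" and jsmith are placeholders, and the Disable-ADAccount step assumes the ActiveDirectory RSAT module; this is my summary of the post's findings, not a script from the post itself:

```powershell
# Sketch: quickly cut off a terminated user's mail access.
# Placeholders: "John Smith" (mailbox identity), jsmith (sAMAccountName).
Import-Module ActiveDirectory
Disable-ADAccount -Identity jsmith                         # block network logon
Set-Mailbox -Identity "John Smith" -RecipientLimits 0      # stop outbound mail almost instantly
Set-CASMailbox -Identity "John Smith" -OWAEnabled:$False -ActiveSyncEnabled:$False
```

The ordering reflects the timings observed above: the recipient limit takes effect almost immediately, while the OWA/ActiveSync disable closed off access within about 5 minutes.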


Posted in Active Directory, Exchange 2010, Windows