How to quickly disable account access in AD and Exchange 2010

While testing the feasibility of a Bring Your Own Device policy with Exchange 2010 ActiveSync, we noticed some odd behavior with disabled accounts.

One of the policies we decided on was that during an employee termination we would disable sending and receiving on the ActiveSync device before we removed the ActiveSync account or wiped the device. The idea was that this would give a terminated employee time to make any personal phone calls before handing their personal device over to IT so we could remove the ActiveSync account. If they refused to hand it over, we would wipe the device instead.

In testing, we originally thought it would be enough to disable the AD account and reset its password to force propagation of the change throughout the forest. To our surprise, though the disabled account could no longer access network resources, it could still send and receive email via ActiveSync. Furthermore, the account could also log in to Outlook Web Access with both the old and the new password. This behavior could sometimes last for hours!

After some research and a little help from the TechNet community, I found that the behavior stems from cached access tokens in IIS. Both OWA and ActiveSync (and EWS) run on IIS, which caches access tokens for up to 15 minutes. In my environment (and a few others) the cached tokens lasted for a few hours, so I'm not sure what other factors are at play in keeping them alive longer than the 15-minute interval. One way to reset the tokens is to restart IIS, but that is a little extreme, as it flushes all access tokens and active connections.
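If you need the cache to expire faster across the board, the token cache lifetime is reportedly controlled by the classic IIS UserTokenTTL registry value. The sketch below is an assumption on my part rather than something we deployed, so verify it applies to your IIS version before relying on it (the value is in seconds; 900 corresponds to the default 15 minutes):

# Hedged sketch: shorten IIS's access token cache to 5 minutes (UserTokenTTL is in seconds)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters" /v UserTokenTTL /t REG_DWORD /d 300
# IIS must be restarted for the change to take effect (this drops active connections)
iisreset /noforce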

One of the various methods mentioned in the TechNet forums was setting the Allowed Recipients to 0:

Set-Mailbox -Identity "John Smith" -RecipientLimits 0

Obviously this still allows the user to access OWA, ActiveSync, and address books, but it stops them from sending any nasty emails through their disabled account after the fact. I also tried setting the storage quota to 0 for sending messages, but that didn't seem to apply in a timely fashion (15 mins). Setting the recipient limit was almost instantaneous and works even during an active OWA session.
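For reference, the storage quota approach I tried looked roughly like the sketch below ("John Smith" is a placeholder; ProhibitSendQuota blocks sending once the mailbox exceeds the quota, which is presumably why it is subject to the slower quota-check cycle):

# Sketch of the quota approach that was too slow to apply in our testing
Set-Mailbox -Identity "John Smith" -UseDatabaseQuotaDefaults $false -ProhibitSendQuota 0KB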

I then tried to see if I could force an IIS token refresh by changing the password of a disabled account and then logging in with the new password. This had the strange side effect of caching two IIS tokens: one that worked with the old password and one that worked with the new one!

Overall, the best method was to disable OWA and ActiveSync on the user account:

Set-CASMailbox -Identity "John Smith" -OWAEnabled:$False
Set-CASMailbox -Identity "John Smith" -ActiveSyncEnabled:$False

This worked within 5 minutes and successfully locked out the account from both services.
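Putting it all together, a termination sequence for this scenario might look like the following sketch. It assumes an Exchange 2010 Management Shell with the ActiveDirectory module available; "jsmith" and "John Smith" are placeholder identities:

# Hedged sketch of the full disable sequence
Import-Module ActiveDirectory
Disable-ADAccount -Identity jsmith
Set-ADAccountPassword -Identity jsmith -Reset -NewPassword (Read-Host "New password" -AsSecureString)
# Cut off the web-facing services immediately instead of waiting out cached IIS tokens
Set-CASMailbox -Identity "John Smith" -OWAEnabled:$false -ActiveSyncEnabled:$false
# Belt and suspenders: stop the mailbox from sending even if a session survives
Set-Mailbox -Identity "John Smith" -RecipientLimits 0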

Posted in Active Directory, Exchange 2010, Windows

How moving your Default Domain Policy from the default location can lead to trouble

At my company we recently decided to change our password policy, increasing the maximum password age to a year while doubling the minimum length. We planned to change the password age via Group Policy so that everyone would have to change their password by the 2nd week of January; once everyone had changed their password we would push the age to 365 days. But try as we might, the policies in our Default Domain Policy just wouldn't take.

While troubleshooting, we decided to use the Group Policy Results wizard in the Group Policy Management Console. Oddly enough, it showed that the changes were applied to the computers we tested against. At this point we thought maybe something was blocking Group Policy updates, so we used the Specops Gpupdate utility to do a batch run of the gpupdate /force command against the Active Directory OUs that house our client PCs. Then we used the following PowerShell command from the Quest ActiveRoles Management Shell for Active Directory to find the password expiration dates of a few users who were properly getting our GPOs:

Get-XADUserPasswordExpirationDate USERNAME
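If you don't have the Quest tools handy, a native alternative (a sketch, assuming the Windows Server 2008 R2 ActiveDirectory module; USERNAME is a placeholder) is to read the constructed msDS-UserPasswordExpiryTimeComputed attribute:

# Hedged native equivalent of the Quest cmdlet above
Get-ADUser -Identity USERNAME -Properties "msDS-UserPasswordExpiryTimeComputed" |
    Select-Object Name, @{Name="PasswordExpires"; Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}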

What we noticed was that their expiration dates were still reflecting the old policy. So we decided to see if we could use PowerShell to edit the Default Domain Password Policy, and you can! We ran the following command to see what PowerShell was reporting as the Default Domain Password Policy:

Get-ADDefaultDomainPasswordPolicy

The output confirmed that even though we had modified the Default Domain Policy, the domain was still retaining the old settings. At this point I was baffled, that is until I took a hard look at the Group Policy Management Console and noticed something glaringly obvious: the Default Domain Policy was not linked at the root of the domain but at the root of the OU we created for our office location. Once I moved the Default Domain Policy to the root of the domain and re-ran the command, I saw the new settings take effect. Much like a PC keeps the last applied GPO settings when it is disjoined from a domain, the root of our domain had retained the last settings applied at the root level after the Default Domain Policy was moved away.
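For what it's worth, the same module can also write those settings back. A sketch (the domain name and the values are placeholders standing in for our actual plan):

# Hedged sketch: set the new maximum age and minimum length on the default domain policy
Set-ADDefaultDomainPasswordPolicy -Identity domain.net -MaxPasswordAge 365.00:00:00 -MinPasswordLength 14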

This is also when I ran into another head-slappingly obvious moment. After the policy was moved to the proper location and reapplied to the OU our client computers were located in, the updated password policy still wasn't being applied to all our users. This is because all users authenticate against a domain controller, and that is where the password policy takes effect, not at the client PC. Once it applied there, we were able to check the password age of a few user accounts and see that the new settings did take effect.

So the takeaways:

  1. The Default Domain Policy must always be at the root of the domain
  2. If possible, avoid using the Default Domain Policy altogether and use separate policies to push down settings.
  3. Since domain users authenticate against domain controllers, password policy settings must be applied to the domain controllers and not the client PCs.

Posted in Active Directory, Group Policy, Windows

How to force OSX using Active Directory authentication to un-cache a mobile account’s username when it is changed in Active Directory

One of the situations we deal with a lot at my company is name changes due to marriages/domestic partnerships. Recently we had to perform a name change for one of our Mac users. This entails changing the following in Active Directory among other things:

  • Common Name (e.g. John Doe)
  • Display Name (e.g. John Doe)
  • samAccountName (e.g. jdoe)
  • userPrincipalName (e.g. jdoe@domain.net)

What we noticed is that our Active Directory-bound Macs won't update the changed username of an account if it is set up as a mobile account in OSX. A mobile account allows offline logins to network accounts by caching login credentials and is turned off by default. From what I can tell, once a mobile account caches the AD login information it never changes it. This results in the user not being able to log in under either the old or the new username on any Mac they had logged into with a mobile account before the name change. New logins/mobile account creations work fine on any other AD-bound Mac. After some research, this is the best method we came across to force the change:

  1. Enable the root account if it is not already enabled
    1. In 10.5-10.7 open the “Directory Utility” either from /System/Library/CoreServices or System Preferences -> Accounts -> Login Options -> Edit -> Open Directory Utility
    2. From the “Edit” menu choose “Enable Root User”
    3. Enter a password for the root account
  2. Log out and log in as root
  3. Once logged in, turn on hidden files from the Terminal
    1. defaults write com.apple.finder AppleShowAllFiles true (set it back to false when finished)
    2. killall Finder
  4. Browse to /var/db/dslocal/nodes/Default/users
  5. Look for the plist file associated with the old user account (a scripted sketch of steps 5-7 follows this list)
    1. Make a copy on the desktop just to be safe
    2. Rename the file to match the new user name
    3. Do a find/replace to change the user name
    4. Save the plist file
  6. Go to the Users folder and update the name of the home folder
  7. From the Terminal, run the following command to verify that the new plist file is recognized
    1. dscl . list /Users
    2. If the new name isn't listed, double-check the plist file name
  8. Log out and then log in under the new account name and verify everything works.
    1. You may need to reset the Keychain as well.
  9. Go back to the “Directory Utility” and disable the root account
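For reference, steps 5 through 7 can be collapsed into a few Terminal commands run as root. This is only a sketch: "jdoe" and "jsmith" are placeholder short names, and it assumes the plist is stored as plain XML (on some OSX versions these files are binary plists; convert with plutil -convert xml1 first):

cd /var/db/dslocal/nodes/Default/users
cp jdoe.plist ~/Desktop/jdoe.plist.bak   # safety copy (step 5)
sed -i '' 's/jdoe/jsmith/g' jdoe.plist   # find/replace the old short name (step 5)
mv jdoe.plist jsmith.plist               # rename the file to match the new name (step 5)
mv /Users/jdoe /Users/jsmith             # rename the home folder (step 6)
dscl . list /Users | grep jsmith         # verify the new record is recognized (step 7)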
Posted in Active Directory, OSX

Take care with SAN Certificates in Exchange 2010 when Outlook Anywhere is enabled with PCs running XP and Vista

The last pain point of our Exchange 2003 to Exchange 2010 upgrade was getting Outlook Anywhere working with our XP clients, which was due to my inexperience with security certificates.

When we originally reviewed our requirements for Exchange 2010, we decided to go with a 20-slot Subject Alternative Name (SAN) security certificate. The reason we needed so many slots was the various outgoing domains we wanted to support. I was also under the mistaken assumption that I could use this certificate for other, non-Exchange purposes, which you can't.

When picking the Common Name (CN) we decided to go with the root domain of our main company. The final layout of our SAN certificate was:

Common Name: MainCompany.com
Subject Alternative Names

    • Mail.MainCompany.com
    • Webmail.MainCompany.com
    • Autodiscover.MainCompany.com
    • Legacy.MainCompany.com
    • Mail.SecondCompany.edu
    • Webmail.SecondCompany.edu
    • AutoDiscover.SecondCompany.edu
    • Legacy.SecondCompany.edu
    • Mail.ThirdCompany.net
    • Webmail.ThirdCompany.net
    • AutoDiscover.ThirdCompany.net
    • Legacy.ThirdCompany.net
    • SecondCompany.edu
    • ThirdCompany.com
    • Maseradedomain1.com
    • Maseradedomain2.com
    • Maseradedomain3.com

This certificate functioned as needed until we decided to roll out Outlook Anywhere a few months into our upgrade. What we noticed is that our XP PCs running Outlook 2007 & 2010 couldn't connect to our Exchange 2010 server using Outlook Anywhere, while our Vista PCs running Outlook 2007 and Windows 7 PCs running Outlook 2010 had no problem. When we checked the Exchange Remote Connectivity Analyzer we didn't see any glaring issues. After playing with various Outlook Anywhere settings and failing, we found that XP clients, and Vista clients below SP1, can only read the CN and not the Subject Alternative Names on a SAN cert. This normally wouldn't be a problem if our CN pointed to an Exchange server hosting the Client Access Server (CAS) role; that can be done by specifying the CN as the MSSTD certificate principal name. But since we used the root domain of our main company as the Common Name, that wouldn't help us.
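Had the CN pointed at the CAS name, the MSSTD fix would have been to publish that name as the certificate principal name Outlook Anywhere clients validate, along the lines of this sketch (mail.MainCompany.com stands in for whatever your CN actually is):

# Hedged sketch: tell Outlook Anywhere clients which certificate principal name to expect
Set-OutlookProvider EXPR -CertPrincipalName "msstd:mail.MainCompany.com"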

Our only option was to change the CN on our SAN cert. In order to do so, the certificate needed to be revoked and a new certificate created with the proper CN. Most Certificate Authorities will let you change the CN up to 30 days after the purchase date, but since we didn't roll out Outlook Anywhere until a few months into our Exchange 2010 upgrade, we had to buy a new SAN certificate altogether. This time we made sure the CN was the external address of our CAS server, which in our case also held the Mailbox and Hub Transport roles. Since our company mailboxes are now fully migrated off Exchange 2003 we no longer need the legacy domains, so a 15-slot SAN cert was purchased this time:

Common Name: mail.MainCompany.com
Subject Alternative Names

    • Webmail.MainCompany.com
    • Autodiscover.MainCompany.com
    • Mail.SecondCompany.edu
    • Webmail.SecondCompany.edu
    • AutoDiscover.SecondCompany.edu
    • Mail.ThirdCompany.net
    • Webmail.ThirdCompany.net
    • AutoDiscover.ThirdCompany.net
    • MainCompany.com
    • SecondCompany.edu
    • ThirdCompany.com
    • Maseradedomain1.com
    • Maseradedomain2.com
    • Maseradedomain3.com

We did the certificate swap over the weekend since we weren't 100% sure whether it would cause issues with webmail, ActiveSync, etc. Luckily no major service disruptions happened once we assigned services to the new certificate. The only noticeable effect was an informational pop-up on our Macs running Outlook 2011, asking to confirm that Outlook 2011 would accept the new configuration information from our Exchange server. Once the new certificate was assigned, our XP PCs running Outlook 2007 or 2010 could successfully use Outlook Anywhere.
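For anyone doing a similar swap, assigning services to the new certificate boils down to something like this sketch (the thumbprint is a placeholder; list your certificates first to find the right one):

# Find the new certificate's thumbprint, then bind the web and SMTP services to it
Get-ExchangeCertificate | Format-List Thumbprint, CertificateDomains, Services
Enable-ExchangeCertificate -Thumbprint "<thumbprint of the new SAN cert>" -Services "IIS,SMTP"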

Posted in Exchange 2010, Outlook, Outlook Anywhere

Moving large VMDK’s from one Windows File Server to another in vSphere 4.X while maintaining share information

At my job we are nearing the end of our W2k8R2 upgrades. The only W2k3 servers left are our file servers. In planning the upgrade we decided to create the new W2k8R2 file servers in advance and, during scheduled downtime, migrate the VMDKs holding the shares to the new VMs. An overview of the plan was as follows:

  1. Create a new W2K8R2 VM, called NewFileServer
  2. On FileServer (W2k3) remove the virtual hard drive holding the share info
  3. Move the VMDK to the datastore folder of NewFileServer and attach it to NewFileServer
  4. Verify data is OK; if so, rename FileServer to OldFileServer and rename NewFileServer to FileServer
  5. Recreate shares and verify access works on FileServer (W2k8R2); if so, remove OldFileServer

Below are the actual steps:

  1. Ensure a recent backup of the W2k3 source VM has been made
  2. Remove all snapshots from the W2k3 source VM
  3. Export the share information from the registry of the W2k3 source VM (see the export/import sketch after this list)
    1. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares
      1. This key contains the names and security for all types of shares (printers, files, etc.)
  4. Under Windows uninstall the hard drive containing the share data and power down the W2K3 source VM
  5. In the vSphere client remove the virtual hard drive from the W2K3 source VM
  6. While still in the vSphere client go to the Datastores and move the VMDK to the proper datastore folder where the W2k8R2 target VM is located
  7. In the vSphere client add the new disk to the W2k8R2 target VM
  8. Go to Disk management and bring the disk online
    1. It may show up as a foreign disk that needs to be imported
  9. Verify file data is intact
  10. Rename the old VM or disjoin it from the domain, and change its IP from static to DHCP
    1. If renamed, the AD computer account should remove itself.
    2. If disjoined, the AD computer account will remain, but it cannot be reset; it will need to be deleted and a new one created
  11. On the new VM, rename its Windows computer name to match the old VM, change the IP to static, and then restart
  12. Once rebooted, import the exported registry file and restart, or just restart the Windows Server service
  13. Verify that the shares work
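The registry export/import in steps 3 and 12 boils down to something like this sketch (C:\Temp\shares.reg is a placeholder path; run the export on the old server and the import on the new one):

# Hedged sketch of step 3, run on the W2k3 source
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Temp\shares.reg
# ...and step 12, run on the W2k8R2 target once the disk is attached
reg import C:\Temp\shares.reg
# Restarting the Server service publishes the imported shares without a full reboot
Restart-Service LanmanServer -Force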

Housekeeping

This doesn't need to be done, but it helps make sure the VM files and datastore folders match the VM's AD name:

  1. Shut down and remove both VMs from the vCenter inventory (DO NOT DELETE THE VMs)
  2. Browse to the datastore and create a new folder for the old VM
  3. Move all the files of the old VM to the new folder
  4. Move all the files of the new VM to the old VM's folder, then delete the now-empty new VM folder
  5. Log in to a host that has access to the datastore
    1. Log in to the ESXi host via SSH
      1. If SSH is not on, log in to the ESXi console of the host you're working with
      2. Turn on SSH access
        1. Set the SSH timeout for 30 or 60 minutes so you don't have to remember to turn it off
    2. Browse to the datastore, e.g.
      1. cd "/vmfs/volumes/NX4-SATA-RAID5-01/Intranet/"
        1. The CLI is case sensitive
    3. Find the VMDK in question and rename it:
      1. vmkfstools -E mail2_2.vmdk intranet_2.vmdk
      2. The flat file will get renamed once the VMDK is renamed
    4. Verify the file was renamed by running ls -l
    5. The various other files can also be renamed in the CLI, e.g.
      1. mv vm-old.vmx vm-new.vmx
      2. This needs to be done for the following file types
        1. NVRAM
        2. VMSD
        3. VMX
        4. VMXF
      3. Description of those files if you are curious
    6. Stay logged in on the off-chance you need to re-enter via SSH, otherwise log out
  6. Once finished, browse to the datastore and rename any non-VMDK file that still has the old name
  7. Re-add both VMs to the inventory
  8. For both VMs, in the VM settings remove the entry for the old VMDK and add the renamed VMDK (make sure it's SCSI (0:0) Hard disk 1)
  9. Turn on both VMs; when prompted, select “I moved it”
  10. Verify both still function.
  11. Check any auxiliary info (repoint backup agents, etc.)
  12. Log out of the SSH session if you haven't already
Posted in File Shares, VMware, W2K8R2, Windows

Recent PC upgrades and component shuffles

A few weeks ago I finished upgrading my current PC setup; the old setup was:

Daily PC

  • OS : Windows 7 Ultimate
  • Case : Antec 900
  • CPU : Intel i7-920 with stock cooler
  • Mobo : Intel DX58S0
  • RAM : 3 x 2GB Patriot  DDR3 1333 (7-7-7-20)
  • GPU : AMD ATI Radeon HD 5580
  • Storage
    • System : 2 x 150GB Western Digital Velociraptor in RAID 0
    • Data : 1TB Western Digital Caviar Green

Development Server

  • OS : VMware ESXi Free 4.1
  • CPU  : Intel Core2Duo Q9600 with stock cooler
  • Case : Generic
  • Mobo : Asus Stryker Extreme
  • RAM : 4 x 2GB G.Skill DDR3 1333 (9-9-9-24)
  • GPU : ATI Radeon X200
  • Storage
    • System : 1 x 1GB PNY USB flash drive
    • Data : 3 x 500GB Western Digital Caviar Blue

The new Setup is:

Daily PC

  • OS : Windows 7 Ultimate
  • CPU : Intel i7-2600K with Cooler Master Hyper 212 Plus cooler
  • Case : NZXT Phantom (White)
  • Mobo : ASRock Z68 Pro3
  • RAM : 2 x 4GB G.Skill Sniper DDR3 1600 (9-9-9-24)
  • GPU : AMD ATI Radeon HD 5580
  • Storage
    • System : 1TB Western Digital Caviar Black
    • Data : 75GB Intel x25 SSD with 20GB dedicated to Intel SRT
    • Backup : 1TB Western Digital Caviar Green

Development Server

  • OS : VMware ESXi Free 5.0
  • Case : Antec 900
  • CPU : Intel i7-920 with stock Cooler
  • Mobo : Intel DX58S0
  • RAM : 4 x 4GB G.Skill RipJaws Series DDR3 1333 (9-9-9-24)
  • GPU : Zotac GeForce GT 220
  • Storage
    • System : 1 x 2GB Sandisk Ultra 2 SD card
    • Data :
      • 3 x 500GB Western Digital Caviar Blue
      • 2 x 150 GB Western Digital Velociraptor

The Core2Duo system replaced my mother's aging AMD Athlon XP 1600+ PC that I built over 6 years ago. My new daily PC uses the latest Intel chipset (Z68), which has two features I was really interested in:

Intel’s Smart Response Technology (SRT)

SRT lets you use an SSD (20GB or more) as a smart cache partition that caches your frequently accessed data on the SSD. I haven't done much testing with it, but so far my PC boots much quicker and loads levels in Portal 2 faster than my old setup. I don't know how much of that to attribute to the UEFI BIOS (boot times) or to the fact that my earlier setup ran “hand-me-down” Velociraptors that sounded and felt like they were constantly seeking (overall file access speed). AnandTech has a great article on SRT. If you plan on using SRT, just make sure your drives are set to RAID mode in the UEFI, or like me you'll have to reinstall Windows after you realize that.

Lucid Virtu’s Quick Sync

Lucid Virtu lets you use both your integrated and discrete GPUs and delegate which tasks go to which, so you can run both with only a minimal performance hit from the overhead. You could, for example, encode a video with the onboard GPU's Quick Sync while playing a game on your discrete GPU. So far I haven't really leveraged the technology and have spent more time fighting it, mostly due to my limited knowledge of it. A few things that tripped me up at first are listed below; once again AnandTech had a great article that helped me through my issues:

  1. To decide which GPU is your default, you hook up your monitor to the desired GPU's display output. So to have everything go to the discrete GPU and offload encoding jobs to the onboard GPU, you need to use the display ports on your discrete GPU, and vice versa
    1. I haven't tried it yet, but I assume that if you want multiple monitors they all need to be on the same GPU when using Quick Sync
    2. When testing with the onboard GPU as my default, I had a lot of issues keeping Windows Aero enabled. The AMD Catalyst driver would also state that it couldn't find a suitable GPU to work with, even though the card worked just fine once Virtu enabled it. So I switched to the discrete GPU as default and those problems went away.

The part swapping and data migration between the 3 systems went well overall, except for the tail end when I was preparing to finally make the i7-2600K my main system. After researching airflow setups, I noticed that I had mounted the Cooler Master Hyper 212 Plus the wrong way: its fans were pulling air off the GPU and pushing it out through the top of the case, when it was supposed to pull air from the front of the case and push it out the back. So I removed it and remounted it the right way, and at the same time I installed a PCIe wifi card. Little did I know I had bent a pin on the CPU socket when putting the CPU cooler back on, so when I tried turning on the system it wouldn't boot. Because I had put in the PCIe wifi card as well, I thought that was the issue, but after removing it the problem remained. So I fell down a 2-hour troubleshooting hole swapping out every component in the system except the CPU; what I eventually found was that the system would only boot if the RAM was in single-channel mode. It wasn't until after I contacted ASRock support for an RMA and packed up the motherboard that I noticed a bent pin on the CPU socket. Out of curiosity I looked up the pinout diagram for the socket, and the bent pin was associated with the DDR3 channels. Luckily ASRock didn't catch the bent pin and sent me a new board in the mail about 2 weeks later. Once I reinstalled all the components it booted up just fine!

In the process of putting together my new PC I came across some interesting personal revelations:

  1. Either the cooling on my i7-920 setup was horribly inefficient and/or the HD 5800 is an extremely hot card. When I was building the i7-2600K I was still using the i7-920 as my daily PC, and when I replaced the HD 5800 in it with the GT 220 I was amazed how much cooler my office got. Now that I'm using the HD 5800 in my i7-2600K setup my room doesn't feel nearly as hot as when it was in my i7-920 setup. This leads me to believe it was more of a cooling issue, since my new case has an obscene number of fans (2 x 200mm, 1 x 140mm and 5 x 120mm) and enough space to do proper cable management.
  2. All this time I had no clue how to tell which side did what on a case fan until a friend told me to check the side of the fan housing. Turns out there are arrows that show the rotational direction of the fan and which way air travels through it. If they aren't there, then air flows into the open end and out the grill end.
  3. While the NZXT case gives me plenty of room for unobstructed airflow and cable management, it doesn't fit under my desk as well as the Antec 900 did, especially when it comes to accessing the top-mounted USB ports. Also, the Cooler Master Hyper 212 Plus was so tall that it bumped against the 3rd side-mounted 200mm fan I wanted to install, so I may get a lower-profile CPU cooler in the future.
  4. I still have no clue how to properly set up or use front-mounted headset and microphone ports. For some reason I can never get them to work like I want to: speakers using the back ports and the microphone using the front ports.

Posted in Intel SRT, Lucid Virtu, Personal

Using SQL Server Express with vCenter? Be mindful of your database limits.

A little over a month ago we had issues with our VMware vCenter server (4.1, running SQL Server 2005 Express) locking up every 12-18 hours. Looking at the event log, we saw the problem: Event ID 9002, stating that the transaction log for database 'VIM_VCDB' was full.

A quick web search revealed that there is a 4GB size limit on databases used by Microsoft SQL Server 2005 Express, and sure enough we were at that limit. At first I was curious how our database got so big, since our VMware environment consists of just 3 hosts and 18 VMs. Then I remembered that a few weeks earlier we had done some load testing for an upgrade to the package our accounting server runs. During the load testing we set the 5-minute and 30-minute intervals to statistics level 4.

At first we tried lowering the intervals back down to level 1, but since we were already at the limit we couldn't add any more data. VMware has a knowledge base article (KB 1025914) describing how to purge old data from the database. The process requires Microsoft SQL Server Management Studio Express to run some scripts that purge the excess data. As a side note, I wonder why it isn't redistributed with the vCenter install or recommended as a download during the install.

We ran the scripts and purged the database from 180 days of data down to 90, but it barely removed more than a few MBs of data. So we kept pruning down until we hit the 20-day mark, at which point the purging script would run for hours and we eventually canceled it. So the majority of the data was in the last 20 days, which makes sense seeing that we were collecting the highest statistics level for the first two interval durations.

Luckily VMware had a fallback referenced in the very same KB article: KB 1007453. What we had to do was truncate the first stats table (VPX_HIST_STAT1) in our VIM_VCDB database, which was a bloated 3,533MB with 16,586,387 rows of data. The script itself was very simple:

truncate table VPX_HIST_STAT1

Once completed, our database shrank down to a few hundred MBs and the lock-ups went away. As a side note, our installation of vCenter Operations wasn't affected by the table wipe, since it keeps its own record of all the performance data it pulls from vCenter.
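If you want to see where the space is going before truncating anything, SQL Server's built-in reporting can tell you (a sketch, run from Management Studio Express against the vCenter database):

-- Hedged sketch: check table size and transaction log usage
USE VIM_VCDB;
EXEC sp_spaceused 'VPX_HIST_STAT1';  -- row count and reserved space for the stats table
DBCC SQLPERF(LOGSPACE);              -- size and percent used of each database's log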

Out of curiosity we looked up the database size limit in SQL Server 2008 R2 Express, which is now 10GB. As for vCenter, support for 2008 R2 Express was added in vCenter 4.1 Update 1, which was also the last version to support 2005 Express; 2008 R2 will be the only Express version vCenter 5.0 supports.

Posted in SQL Server 2005 Express, vCenter 4.X, VMware

Could webOS be HP’s “New Coke”?

I have zero marketing experience and this is just me thinking (typing?) out loud. What if HP saw the lackluster success of the Motorola Xoom, Samsung Galaxy Tab, BlackBerry PlayBook, and their own HP webOS tablet in comparison to the iPad's runaway success, and spotted a brilliant but unorthodox opportunity?

Weathering the storm with the HP webOS tablet like every other iPad competitor was going to cost them a lot of time, money, and resources. The fight for 2nd place in the tablet market is just as hard (if not harder) than the fight for 1st, with none of the perks, since the market share for the #2 spot is so drastically small compared to the #1 spot. What if HP could take a relatively small hit money-wise (compared to sticking with the device over the next 6 to 12 months) by liquidating the current stock at a bargain price to create an “Apple-like” hysteria for the product? The plan works and suddenly everyone wants one, and HP gets their entire stock of tablets into the hands of consumers, which is unheard of for any company in the tablet market that isn't Apple. Over the course of a few days the market penetration of webOS increases exponentially.

Now here comes the risky payoff: what if people actually like webOS? What if this creates a demand in the market for more webOS devices? HP still owns the rights to webOS, so does it:

  • Sell the rights to webOS to hopefully break even, banking on the recent explosive user adoption making it attractive to hardware manufacturers like Samsung, HTC, LG, etc., all of whom, in light of the Motorola/Google and Microsoft/Nokia deals, are probably worried about their current ability to successfully sell Android and (to a lesser extent) Windows Phone 7 devices?
  • Retain the rights and then license webOS out to these same manufacturers in hopes of turning it profitable?
  • Sit on the rights and then release the “HP webOS tablet 2” next year, hoping that the newfound popularity will give them an easy 2nd-place win in the tablet market if they can keep a sub-$200 price point?

Hopefully in the next few months we'll see how this plays out. At the very least this gives Amazon, with their forthcoming tablet, some valuable insight into how to create an insatiable demand for your product.

Posted in HP, Thoughts, webOS

How custom Outlook calendar forms can cause numerous headaches

We recently migrated from Exchange 2003 to Exchange 2010. Aside from a few configuration hiccups, the migration went off without any major issues. But within a few months of the final mailbox move, a few of our Mac users running Outlook 2011 started getting errors when accepting meeting invites. If they replied with an acceptance, a decline, or a tentative, the meeting would get added to their calendar but Outlook 2011 would generate the following error:

HTTP error. The server cannot fulfill the request
Error code: -18500

The response email would remain in the Outlook 2011 Sent folder and the meeting organizer would never receive a response, so the affected user would show up as not having responded in the organizer's Scheduling Assistant. The same issue occurred in Outlook Web Access, but with a different HTTP-related error. This problem wasn't encountered by Outlook 2007 and 2010 users, so we believed it had something to do with the web services in our Exchange 2010 setup, since both OWA and Outlook 2011 rely on the same services. But the one thing that bothered me was that the issue wasn't affecting all our Mac users running Outlook 2011, only three of them. So I went over our Autodiscover, web services, and SSL certificate settings but didn't find anything that would cause this issue. Web searches on the error code and message didn't point to anything worthwhile either.

At the same time I was testing room mailboxes in Exchange 2010. Our company has relied on public folders to schedule our various conference rooms, and I was hoping to introduce room mailboxes to alleviate the pain points our users were facing with the current method. So I created two test rooms and tried booking them; strangely enough, they would never accept my invites. I double-checked the booking attendant settings, changed registry settings on my computer to force auto-booking in Outlook 2010, and played with the resource policy settings on the mailboxes, with no positive results. When I logged into the room mailboxes, all my invites were sitting in the inbox. Out of curiosity I tried via OWA and it worked; then I tried via Outlook 2011 on my Mac and that worked too. At that point I figured there had to be something weird with my Outlook 2010 setup. My first thought went to our new VoIP phone system.

About 2 months before our Exchange 2010 migration we replaced our 20-year-old phone system with a ShoreTel VoIP system. One of the nice features was a software component (ShoreTel Communicator) that allows us to access all the features of our desk phones through our computers. In addition it has multiple Outlook integration features, one of them being calendar integration that automatically changes your call handling (e.g. go straight to voicemail) if you are in a meeting. This option added two extra buttons to the new appointment form, allowing me to set various call handling options for each meeting I scheduled. I was starting to think that this was the culprit, so I uninstalled the calendar integration, but the two extra buttons remained. I then tried uninstalling the client, rebuilding my Outlook profile, and then signing into another computer (we use roaming profiles, but my account doesn't). Those two extra buttons still remained in my new appointment form. Strangely enough, the two extra buttons only appeared when creating a new appointment in my own calendar folder, not in any other calendar folder I had access to. So appointments created from shared calendars did not produce the error when sent through Outlook 2010; only my appointments with the two extra buttons did.

The final smoking gun was when I scheduled a meeting in OWA with one of the Mac users running Outlook 2011 who was suffering from the meeting invite error. This user was able to accept my meeting invite when they previously couldn't, so I tried sending this user another meeting invite via Outlook 2010 and the error returned. I then contacted our ShoreTel reseller, and after some searching of the ShoreTel support forums we found a few posts that pointed us to a possible cause. The ShoreTel calendar integration replaces the default calendar form (IPM.Appointment) with a new one (ShoreTel Appointment). This form has some added settings that the ShoreTel system uses to set the user's call handling, but it also has some changes that Outlook 2011, OWA, and resource mailboxes don't like. So we went around to each Windows user and changed the calendar form back to the default one, and we are currently awaiting a resolution from ShoreTel.

Looking back at the issue, I think that in our environment this form was stored in each user's Personal Forms Library, which is a hidden item in the root folder of the user's mailbox (KB290802). This would explain why the issue followed my account even though I don't have a roaming profile. I assume this is the default location when an Organizational Forms Library isn't present.

Posted in Exchange 2010, Outlook, ShoreTel

Google to Motorola, these ARE the droids we’re looking for!

A few small observations and questions in the wake of Google’s announcement to acquire Motorola’s Mobility division:

  1. Each major mobile OS now has its own dedicated hardware manufacturer.
  2. This purchase gives Google a huge influx of patents. Are they hoping this new patent cache will scare off any new patent lawsuits against Android?
  3. Why not HTC, the maker of the G1 and Nexus One? Did they not have enough intellectual property? Was this Google lashing out in the recent patent wars, with the extra bonus of gaining a dedicated handset and tablet maker?
  4. While on the subject of HTC, what happens to all the OS-agnostic handset makers? Samsung, HTC, LG, etc. probably didn't worry too much when Microsoft partnered with Nokia, but they must be worried about their mobile handset plans after this announcement.
  5. On a recent Engadget podcast it was mentioned how most non-tech enthusiasts refer to Android handsets as Droid handsets; now at least they won't be wrong.
Posted in Android, Google, Motorola, Thoughts