Tuesday, August 25, 2015

How To: Office 365 Group Administration #Office365

At Ignite this year Microsoft finally released a set of cmdlets that can be used to manage groups within your tenant. Here is a quick primer on the available cmdlets that can be used to manage group administration within your company. I'll add additional cmdlets and administration tips as I run across them.

After connecting to Exchange Online you can run Get-Help *UnifiedGroup* to see a list of the group cmdlets that are available. This is a quick way to see what the shell offers for managing your groups.

The Get-UnifiedGroup cmdlet returns detailed information about a group. For instance, to display a listing of all the groups within your tenant along with their aliases, run the following:
Get-UnifiedGroup | ft DisplayName, Alias

A new group can be created within PowerShell using the New-UnifiedGroup cmdlet. For example, the following creates a new group called Cedar Park Sales with the alias CPSales:
New-UnifiedGroup -DisplayName "Cedar Park Sales" -alias "CPSales"

The Add-UnifiedGroupLinks cmdlet is available to help add or remove members from an existing group.
Add-UnifiedGroupLinks CPSales -LinkType members -Links jharris, lryan
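Removal works the same way. Here is a hedged sketch using Remove-UnifiedGroupLinks (the group and user names are just the examples from above):

```powershell
# Remove a member from the group; -Confirm:$false suppresses the prompt in scripts
Remove-UnifiedGroupLinks -Identity CPSales -LinkType Members -Links lryan -Confirm:$false

# Owners are managed with the same cmdlets; a user must already be a member
# before being promoted to owner
Add-UnifiedGroupLinks -Identity CPSales -LinkType Owners -Links jharris
```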

How do I restrict group creation?

You can restrict who can create groups through the use of a new OWA mailbox policy. You may have a subset of users that have been identified that should not have the ability to create groups. You can handle this requirement with the following PowerShell cmdlets:
  1. New-OWAMailboxPolicy -Name DisableGroupCreation
  2. Set-OWAMailboxPolicy -Identity DisableGroupCreation -GroupCreationEnabled $false
  3. Set-CASMailbox -Identity lryan -OWAMailboxPolicy DisableGroupCreation
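If you need to apply the policy to more than a handful of users, you can pipe a filtered set of mailboxes into Set-CASMailbox instead of assigning it one user at a time. A sketch (the department filter is just an example; adjust it to match how you identify the restricted users):

```powershell
# Assign the restrictive policy to every mailbox in a department (example filter)
Get-Mailbox -Filter "Department -eq 'Sales'" -ResultSize Unlimited |
    Set-CASMailbox -OWAMailboxPolicy DisableGroupCreation
```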

Be prepared to wait 5-10 minutes for the new OWA mailbox policy you created to propagate and take effect. In the cmdlets above we removed the right for the user lryan to create new groups. In the screenshot below you will see that this user had the ability to create a new group which is indicated by the plus sign.

After the new OWA mailbox policy is synchronized, the ability to create groups has been removed for the user lryan, as shown below.

The administration story for groups is nowhere near complete and is definitely a work in progress. Microsoft understands the current state of group administration and is making great strides to improve the group experience for both the end user and the administrator.

Stay tuned for further updates this fall!

Wednesday, July 1, 2015

Did You Know: The Hidden Complexity of Intra-forest Migrations. #Office365

I’ve had numerous customers talk to me over the last several weeks about consolidating child domains into an empty root. In each of these cases the customer was looking to clean up old directories to prepare for a migration to Office 365. It is interesting to see how things come in waves. At any rate, I thought providing a quick primer on what happens during an intra-forest migration would be useful.

Each of these customers thought that the migration was dead simple and that they would be able to complete it in several weeks. After all, it's not like the project will be dripping with complexity like migrating security principals cross-forest without a trust.

Ahh, the seemingly simple tasks that quickly take on a life of their own and become massive undertakings. I’ve had several of these types of projects around the house in my time and each one resulted in numerous unscheduled trips to the home improvement store.

Intra-forest migrations on the surface seem relatively simple, and they can be with the right planning. But with the wrong planning – these migrations can cause a lot of end user heartache.

How does this all work in an intra-forest scenario? Let’s review some of the basics here. For this discussion, I have a user account (adorsey) in a domain called na.contoso.com with a NetBIOS name of NA. Now, let’s say that we need to flatten this child domain into the root domain called contoso.com that has a NetBIOS name of CONTOSO.

If my account will be migrated to contoso.com then my account needs a new unique SID from the root domain. Why?

A common misconception among customers that I talk to is that the user object like adorsey can simply ‘move’ from a child domain to the root. In practice, though, that is not technically what happens (even if using ADMT). When collapsing a subdomain into a root domain (intra-forest) a new user account is created for adorsey in the root.

The reason that a new user account has to be created for adorsey is that duplicate Object-SIDs cannot exist in the forest. This presents a challenge for intra-forest migrations because that means that you cannot easily prepopulate security principals without effectively deleting the original accounts.

Since a new user account is created in contoso.com, a new SID must be generated for the user account and stored in the Object-SID property of the security principal. Each security principal can only have one Object-SID as this attribute is not a multi-value property.

So how can you see the Object-SID value of a user? Using the Active Directory Users & Computers tool is the easiest way to see a user's Object-SID, as shown below.

You can also use the dsquery command to show the Object-SID of a user. 

dsquery * -Filter "(samaccountname=adorsey)" -Attr objectSID
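If you have the Active Directory PowerShell module available (it ships with RSAT), a similar lookup can be sketched there as well:

```powershell
Import-Module ActiveDirectory

# The SID property is returned by default as a SecurityIdentifier object;
# .Value renders it in the familiar S-1-5-21-... string form
(Get-ADUser -Identity adorsey).SID.Value
```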

You may ask (like my customers) why can you not just inject the Object-SID from na\adorsey to contoso\adorsey?

Each SID that is stamped on new security principals is actually made up of several different items. First, the domain identifier portion of the SID is actually unique to the issuing domain. The first grouping of values contains information about the SID structure and domain membership.

The remaining values are arranged in a specific grouping almost like a telephone number. This grouping (or telephone number) contains the relative identifier (RID), which is handed out by domain controllers within the domain. Each domain controller is given a preset range of unique RIDs by the single domain controller that holds the RID Master FSMO role. This allows all domain controllers to create new security principals and provide the RID when a new SID is created. This method also ensures that duplicate RIDs are not handed out within the domain.

You can use the following command to see how many RIDs have been issued within your domain.

Dcdiag.exe /test:ridmanager /v

There are additional ways to use PowerShell to verify how many RIDs have been issued and what the highest number in the allocation pool is.
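One such approach reads the rIDAvailablePool attribute from the RID Manager object with the AD module. This is a sketch of a well-known technique; run it in your lab first:

```powershell
Import-Module ActiveDirectory

# rIDAvailablePool lives on the RID Manager object in the System container
$domainDN = (Get-ADDomain).DistinguishedName
$pool = Get-ADObject -Identity "CN=RID Manager`$,CN=System,$domainDN" -Properties rIDAvailablePool

# The attribute packs two values into one 64-bit integer:
# high 32 bits = top of the pool, low 32 bits = next RID to be issued
$value   = [int64]$pool.rIDAvailablePool
$poolTop = [int32]($value -shr 32)
$issued  = [int32]($value -band 0xFFFFFFFF)
"RIDs issued so far: $issued (pool ceiling: $poolTop)"
```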

So if the domain membership of a user object changes, then the SID must change to match the new domain: the domain identifier portion of a SID is unique to the issuing domain, and the RID comes from the new domain's pool. So the SID for contoso\adorsey has a different domain identifier than the original na\adorsey account. These are the rules of Active Directory. Period.

As you now see, you cannot just inject the Object-SID from na\adorsey into contoso\adorsey. So what happens in an intra-forest migration if Object-SIDs cannot be migrated?

Once the contoso\adorsey account has been created and stamped with a new Object-SID, the previous Object-SID value from the na\adorsey account is injected into the SID-History (sIDHistory) property of contoso\adorsey. The SID-History property can actually hold multiple entries and may already contain some values if you have migrated the account before!

So what is the best way to verify that the previous Object-SID value from the na\adorsey account was injected into the SID-History (sIDHistory) property of contoso\adorsey? You can certainly use the Active Directory Users & Computers tool to see what values are stored in the sIDHistory attribute.

I like to use the dsquery command as it provides a nice way to copy or export the value. In the example below we see that the Object-SID value from the na\adorsey account is successfully injected into the SID-History (sIDHistory) property of contoso\adorsey after an intra-forest migration. 

dsquery * -Filter "(samaccountname=adorsey)" -Attr sIDHistory
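The AD module offers the same check if you prefer PowerShell output. A sketch (the -Server value is a placeholder for your root domain):

```powershell
Import-Module ActiveDirectory

# Query the root domain for the migrated account; each sIDHistory entry
# is returned as a SecurityIdentifier object
(Get-ADUser -Identity adorsey -Server contoso.com -Properties sIDHistory).sIDHistory
```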

To sum this up: each time a security principal moves to another domain, a new SID is generated. This SID is written to the Object-SID property, and the old value is added to the list in SID-History. This, of course, happens only if SID history migration is part of your migration project.

So when contoso\adorsey logs into the domain, his Object-SID, the values stored in sIDHistory, and the SIDs of each group he is a member of make up his access token. This token is presented whenever contoso\adorsey tries to access a file, a share, or any other NTFS-protected content.

Now I will point out that the field in which all these SIDs are stored within the access token is not unlimited in size. If the maximum number of SIDs in the access token is reached, the user may have problems logging into the network.

I’ve seen this situation of “token bloat” rear its ugly head within the scope of large-scale Exchange migration projects where RPC/HTTP is used as the client connectivity method. If you only have Windows 2012 or above domain controllers in your organization, some of the token bloat issues have been relieved due to an increased default MaxTokenSize.
Regardless though, there is a maximum number of SIDs that can be stored in the access token for each user. 

So there you have it. I provided a basic primer for intra-forest migrations and explained how they can quickly become complex. The main point that I want to get across is that within an intra-forest migration ‘moving’ an account is actually a destructive process for the source account. That account is deleted and recreated in the target environment. This results in a limited window of opportunity to test and make sure everything works prior to migrating users. As with most projects, proper planning is required to achieve a successful migration.

Monday, May 11, 2015

2015 Ignite Review. Was it Really That Different? #MSIgnite #MSExchange #iammec #Office365

It’s been a couple of days since I left Chicago and the Microsoft Office 365 and Azure haze is starting to lift! Ha. After some time reflecting on the week, I left Chicago with a mix of emotions. This is the first time that Ignite/TechEd was a business trip and not a paid week of training for me.

I spent the majority of my time at my employer’s booth talking with customers and finding ways to solve problems. It is always extremely satisfying when you are able to help people and see their faces light up during that “ah ha” moment.

Love it.

Over the years we have come to view TechEd as a large Microsoft conference that has always included a certain degree of level 100 marketing material with a plethora of level 200/300 sessions. Oftentimes the level 300 sessions leave much to be desired. But there are several members of the Microsoft delivery and product teams that always over deliver and are incredible teachers. The complaints about the lack of seating at these sessions are always plentiful – always have been.

So what were my expectations for Ignite? History has taught me to expect more of the same. In the time leading up to Microsoft Ignite we all talked about expectations, and everyone generally thought it was going to be more of the same – a larger TechEd – and I feel like it was.

The conference just did not feel drastically different from TechEd in look and feel. The sessions around Exchange seemed to have the same look and feel of TechEds past. If the TechEd brand were still present in Chicago, would anyone feel like they were at a different conference? I argue not.

Many in the Exchange community compared the Microsoft Exchange Conference (MEC) to Ignite, and I DO NOT feel like that is really fair. The last two MECs delivered much better content than any TechEd that I’ve attended over the past 10 years.

The whole MEC experience had a carefully branded look and feel in 2012 and 2014. It was absolutely a celebration of messaging, and the availability of Microsoft personnel at these events was incredible given the lower attendance.

Personally, I felt like the original message from Microsoft that Ignite was consolidating MEC and the Lync Conference to be pure marketing and a message to ease the outrage from the ones that held MEC as sacred. I can empathize with that, as I too was upset upon learning of MEC’s demise.

But let’s realize that MEC was under the control of a gentleman that understood the Microsoft Exchange community and was part of it. Ignite on the other hand was a large corporate training event for 20,000 people. The consolidation of events was a purely financial one given the large costs to fly out product group teams and take them away from their day-to-day responsibilities several weeks a year.

What was wrong?

There has been a lot of criticism aimed at Microsoft over the delivery of Ignite, and I agree with some of it. For instance, fellow MVP Gary Steere noted the lack of effort by Microsoft to limit the environmental impact of such a large conference. I totally agree, especially since many of these items had been addressed by TechEd in years past.

Don Jones also pointed out how the dining staff at Ignite was not the most professional and was a little bit harsh. I saw this firsthand as well and thought it was incredibly unprofessional.

While Microsoft stated that they were consolidating MEC and the Lync Conference into Ignite, they certainly did not work to keep the UC groups in the same general areas! The synergy between the Lync and Exchange communities has always been strong and was felt at both MEC and the Lync Conference.

In talking with customers throughout the day many of them complained that the buses did not run throughout the day and trying to get a taxi was an hour-long proposition. This certainly added to my frustration when trying to get to other conference hotels during the day for customer meetings. This was certainly a fail in my book.

Let’s just call a spade a spade and all agree that the food was absolutely horrible. To make matters worse, on Monday the exhibitors could not actually sit in the same dining room as the attendees. Really! The outcry on this was so swift and strong that this was changed by Tuesday morning. I’m not sure the thought behind this as lunch and breakfast is a great time for Microsoft partners to catch up with customers without the noise of the expo floor.

What was positive?

There was evidence that Microsoft did listen to a lot of the positive feedback funneled back from MEC. One item that I took note of was the continued use of panel sessions moderated by Microsoft and independent voices like MVPs. I liked seeing the independent voice represented in several panel sessions and greatly appreciated this. Personally, I feel like the community could tolerate another 4-6 of these types of sessions next year. How about you?

Since this was Microsoft’s only conference for the year it was great to meet and spend time with all my MVP, MCM, Microsoft and partner peers during the week. A lot of long-term relationships and bonds are forged at events such as these.

Ideas for next year?

Going forward, sessions for tightly knit communities like Lync and Exchange should be kept close together. It would be nice to provide independent branding within the Ignite umbrella for these communities. This would go a long way toward fostering the small-community feel of MEC within the large, corporate-backed Ignite conference. The blueprint for MEC, the Lync Conference, and MMS is plain for all to see!

More panel sessions please! I really like it when the independent voice of MVPs or MCMs is mixed in with Microsoft employees. Who doesn’t love when a stock Microsoft answer is provided and someone pipes up and says “well actually…in my last engagement…”? The conversation that ensues is incredibly engaging, genuine, and, one can argue, an amazing way to learn. More please.

My assumption is that the food can only improve. The food at TechEd has always been tolerable and actually surprisingly good given the scale and speed by which it is delivered.


Given that Microsoft already has long-term experience with delivering a highly technical conference at massive scale (i.e. TechReady) much larger than Ignite, some of these blunders were surprising.

At the end of the day though, smaller conferences like MEC have vastly different goals and vastly different measurements of success than Ignite. You cannot simply take a boutique conference like MEC, scale it out to accommodate 20,000 attendees, and expect the same look and feel. Removing the creative voices that come from each Microsoft community and forcing a corporate template across the board is not going to result in MEC – it is going to result in what we saw at Ignite. Meet the fiscally responsible corporate Microsoft.

The winners of the consolidation of TechEd are clearly the independent conferences that have maintained the boutique “MEC-like” feel and can drive deeply technical content that is free of a carefully crafted marketing message.

Microsoft has played their card with Ignite and now it’s up to the independent voice to be heard and see how it measures up. I’m excited!

Friday, May 1, 2015

2015 Microsoft Ignite Giveaway! #MSIgnite #MVPBuzz #Office365 #Surface

You’re all like, “the tablet giveaway at MEC was awesome!” And we’re all like “why does the MEC bag still stink?” And you’re like “I wonder if Microsoft will give us another deal on Surfaces like 2013 because we all promised to stay off eBay.” And we’re like “We’ve got a Surface Pro 3 to give away for free!” And you’re like “Wow!”

Time for Gary and me to help upgrade your tablet or laptop! Are you looking for a new Microsoft Surface Pro 3? C’mon, you know that your eye has been on this great tablet from Microsoft for a long time! 

Yes, we’re serious, this contest is about a couple of MVPs (Justin Harris and Gary Steere) that love the community and are giving away a Surface Pro 3 that we bought with our own money.  A big thank you to our employer Binary Tree for helping with the logistics.  

How Do I Enter? We require just two simple steps! Let’s keep this contest simple and easy for everyone and have some fun with it. After all, a Microsoft Surface is nothing to sneeze at! 
  1. Step One: Follow both of Binary Tree’s Exchange MVP/MCM’s on Twitter between May 1 and May 6th:
    1. @ntexcellence
    2. @GS_MCM
  2. Step Two: Stop by the Binary Tree Booth (#573) and opt to have your badge scanned.  That’s it, you’re entered!!!
    1. If your twitter account does not match your name, or your company name, please let us know when you stop by the booth.  We’ll be matching winning Twitter accounts to booth visitors and we want to ensure you get your prize!

Prizes: One brand new unopened Microsoft Surface Pro 3 (256GB/8GB/i5) with an accompanying black Type Cover. 

Selection and Notification of Winner
Entry closes at 12PM local time at Ignite on Wednesday May 6th, 2015. The first eligible winner will be announced via Twitter by 1PM local time on Wednesday May 6th. The winner will have until 10:30AM on Thursday May 7th to come to the Binary Tree booth to claim the prize.

We really want to give this away, so unfortunately no extensions to this time can be granted. Should the winner not come to the Binary Tree booth by 10:30AM, a runner up will be announced at 10:30AM on Thursday May 7th.

NOTE: All times mentioned in the giveaway are local to Chicago where Microsoft Ignite is being held.

Conditions: You know, the fine print stuff!
You must follow all of the following accounts on Twitter at the time of the drawing to be eligible:  
  • @ntexcellence
  • @GS_MCM 

You must visit the Binary Tree Booth (#573) and have your badge scanned to complete your entry. The winner will be chosen at random from the pool of all new Twitter followers for:
  • @ntexcellence
  • @GS_MCM 
The winner will be verified against attendees that have visited the Binary Tree booth and opted to have their badge scanned.  
Once the winner has been verified, a Tweet will be sent out announcing the winner. No alternate methods will be used for notifying the winner.
  • Binary Tree employees and their families are not eligible to enter.
  • Gary Steere and Justin Harris reserve the right to enter, but then we would need to cut the Surface Pro in half.  Our research leads us to believe that ½ of a Surface Pro may not function as well as the whole Surface Pro.  And, we’re not certain that one can follow himself on Twitter. As such, we also reserve the right to disqualify entries from ourselves if chosen.
  • If your twitter account does not match your name, or your company name, please let us know when you stop by the booth.  We’ll be matching winning Twitter accounts to booth visitors and we want to ensure you get your prize!
  • Twitter follows must be made between May 1 and the time of the drawing on May 6th to be eligible.  
  • Visits to the Binary Tree both must be made prior to the time of the drawing on May 6th to be eligible.  
  • You must be following @ntexcellence, @GS_MCM and @BinaryTreeInc at the time of the drawing to be eligible.
  • Twitter accounts must be at least 15 days old.
  • Gary Steere, Justin Harris, Binary Tree, Inc. and its employees are not responsible for typographical errors.
  • Justin Harris, Gary Steere, Binary Tree, Inc. and its employees are not responsible for electronic failures, including but not limited to, failure of the Twitter network, failure of wireless networks at Ignite or failure of cellular networks.
  • You must be 18 years of age to enter
  • Multiple entries via Twitter by the same individual will disqualify that individual.
  • Decisions on time of day and accuracy of your Apple Watch vs. our Microsoft Band’s time are final and may not be appealed or disputed.
Participants in this contest agree that we may use this promotion for publicity, advertising, and any other marketing purpose. We may use the name and likeness of winners as part of this promotion.

Friday, April 24, 2015

Did You Know: Customizable O365 Send Receive Limits. There is More to It! #MSExchange #Office365

Last week on April 15th, Microsoft announced a change to increase the allowed max message size to 150 MB. This means that Office 365 administrators (with global admin privileges) can customize the current maximum message sizes that can be sent and received from Exchange Online from 25 MB all the way up to 150 MB! 
Office 365 administrators can change the MaxSendSize and MaxReceiveSize parameters on mailbox objects. This means that larger messages can be sent and received using the MAPI protocol (Outlook).

The Problem Statement
Many people that I’ve talked to or articles that I have read are strictly focusing on the use cases for sending 150 MB messages. Yes, I agree that the old Exchange administrator in me winces when I think about end users sending 150 MB files through an Exchange system. But in reality, I do not think this was the sole driving force behind Microsoft making this change. 
A lot of people are missing the beauty of this change! Here is what I mean.
A lot of customers that I speak with on a daily basis are looking to migrate to Office 365 from a third-party hosting provider. In these types of environments the hosting provider typically does not provide the rights required to migrate to Office 365 with the New-MoveRequest PowerShell cmdlet. This means that EWS or MAPI are often used as the transport mechanism to copy user mailbox data from the source hosted environment to Office 365. 
The problem with this ‘copy’ based migration method is that the largest email message that could be moved into Exchange Online was 25 MB. If you tried to move items larger than 25 MB into Office 365 those messages would be rejected. 
As you can imagine – it is not hard to find users that have attachments in their mailbox that are larger than 25 MB! Think about all those large PowerPoint slide decks, spreadsheets and videos that you have in “your” mailbox today!

The Solution
This change can be made within your O365 tenant on an organization-wide or even a per-user basis, as you can see below:
Change the mailbox plan:
Set-MailboxPlan ExchangeOnlineEnterprise-c7c130d6-15d9-4b85-9723-450db9d42aae -MaxSendSize 150MB -MaxReceiveSize 150MB

Change for all existing mailboxes:
Get-Mailbox -Resultsize Unlimited | Set-Mailbox -MaxSendSize 150MB -MaxReceiveSize 150MB 
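If only a subset of users needs the higher limits, the same parameters can be applied to a single mailbox (the user name here is just an example):

```powershell
# Raise limits for one user only
Set-Mailbox -Identity adorsey -MaxSendSize 150MB -MaxReceiveSize 150MB

# Verify the change took effect
Get-Mailbox -Identity adorsey | fl MaxSendSize, MaxReceiveSize
```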

You will see in the screenshot below that I was able to send a 95MB attachment (using Outlook) from one O365 tenant to another O365 tenant. Now, I will point out that I increased the send and receive sizes in both environments prior to testing. 


Now you do have to keep in mind that these max send and receive limits are applied differently depending on the mail client that you choose to use. For instance, with Outlook you can send a single 100MB attachment if your O365 send limits are set properly. 

However, with Outlook Web App (OWA) the original 25MB limit is still enforced on each individual attachment. The overall send size is still applicable, though. This means that you can send multiple attachments within OWA that add up to your total send size, but each individual attachment is still required to be under 25MB. This is an important distinction to make.


I feel like this is a great change for those customers that have been looking to move to O365 but are currently utilizing a third-party hosting provider for their email. Before this change these customers would have to remove any attachments over 25MB or make specific plans for them before using EWS or MAPI to copy the mailbox data from the source environment to their new O365 tenant. I feel like this is a terrible change if customers simply want to send 150MB files. Shudder. That goes against everything us Exchange administrators have been preaching and educating the business about for years. 

I realize though that moderation in all things is practical. Now having the ability to set higher limits for those users that work with large files on a day-to-day basis is a great feature and allows them to stay productive. Being able to set large message limits during a migration into O365 is an amazing change and excites me. Let the MAPI migrations begin! 

Wednesday, April 15, 2015

How To: Quickly Spot Pesky Exchange 2013 Performance Issues #MSExchange

Have you ever wondered if there was a quick way to look at your Exchange 2013 environment and see potential performance problems outlined in red? Well, this is a quick post to let the community know about a great new Exchange 2013 script that has been published to help quickly spot items that could get in the way of an otherwise smoothly operating Exchange 2013 environment.

This is a script that Marc Nivens wrote and recently uploaded to the TechNet script gallery. When executed, the Exchange 2013 Performance Health Checker script checks common configuration settings that are known to cause performance issues. These issues are already covered in the Exchange 2013 sizing recommendations, but we all know how well people proactively read material like that when our messaging environments are running smoothly. I know it’s not just me!

The value in this script is that the output displays what is a cause for concern in a nice red color. This is a clear signal that serves to quickly point out what we need to focus our attention on. 

I quickly ran this script against one of my lab Exchange 2013 servers and the results are below.

  • The output shows in yellow that the machine has been identified as a virtual machine (Hyper-V) and that I should check several items to ensure that my virtualization configuration is in line with best practice. The URL to the virtualization recommendations on TechNet was provided. Nice touch. 
  • The script caught that my pagefile settings are not set for optimal Exchange 2013 performance. The system was set to automatically manage the pagefile instead of manually configuring the value. You can see this clearly spelled out in red. 
  • My power plan was not set to high performance. Again, spelled out in red.
Items Reported On:
  • Exchange Build
  • Physical/Virtual Machine
  • Server Manufacturer and Model (physical hardware only)
  • VM host processor/memory configuration recommendations
  • Exchange server roles
  • Pagefile Size
  • Power Settings
  • .NET Framework version
  • Network card name and speed
  • Network card driver date and version (Windows 2012 and Windows 2012 R2 only)
  • RSS enabled (Windows 2012 and Windows 2012 R2 only)
  • Physical Memory amount
  • Processor Model
  • Number of processors, cores, and core speed
  • Hyper-threading enabled/disabled
  • Processor speed being throttled
  • Current list of active/passive databases and mailboxes (optional)

The script provides clear value in the ability to quickly run against a single server or a group of Exchange 2013 servers and verify items like the pagefile settings and .NET Framework versions. I know that in large environments variations in the .NET Framework versions can cause headaches, and this script is a great way to quickly spot them!
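For reference, a typical invocation looks something like the sketch below. The -Server parameter name may differ between versions of the script, so check Get-Help against your copy before running it:

```powershell
# Run from the Exchange Management Shell against a single server
.\HealthChecker.ps1 -Server EX2013-01

# Or loop over every Exchange 2013 server in the organization
Get-ExchangeServer | Where-Object { $_.AdminDisplayVersion -like "Version 15*" } |
    ForEach-Object { .\HealthChecker.ps1 -Server $_.Name }
```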

The HealthChecker.ps1 script is another great addition to the Microsoft Script Center website and I recommend that you try it out in your own environment.

After all, what configurations could be lurking that may be preventing your Exchange 2013 systems from running optimally?

Tuesday, April 7, 2015

Did You Know: Repliability Problems are Real! Still! Part 2 #MSExchange

Last week I introduced the idea that the problem of repliability still exists today after all these years. We discussed the history of the X.500 specification and how it relates to Exchange. Today we are going to address the obstacles with names in Exchange today.

Obstacles with Names in Exchange Today:
Over the years, SMTP has been standardized as the protocol that is used when sending email over the Internet. While this holds true even for Exchange 2013, many assume that SMTP is used to process messages that are sent and received within an Exchange organization. Unfortunately, many have learned the hard way that this is not how Exchange actually works in the real world. When Exchange needs to process and send a message to a user from the same organization, the X.500 address is read from the recipient in Active Directory.

Remember from our history lesson that the X.500 specification was created because information about individuals such as the surname, given name, or address could be easily stored and then retrieved. In this case, Exchange uses the X.500 address, which is stored in the user's legacyExchangeDN Active Directory attribute.

This means all mail objects within Active Directory have a unique X.500 address stamped on the account. The legacyExchangeDN value is stamped on the user account when the Exchange mailbox is first created. So if a specific Active Directory user does not have a properly populated legacyExchangeDN attribute, Exchange will not be able to deliver email to that user internally.

Microsoft saw an opportunity to speed up the name resolution process to help mail clients obtain the proper name for a recipient. To help Microsoft Outlook speed up the resolution of previously used X.500 addresses, a caching file was created. Starting with Outlook 2003, this cache file (OutlookProfileName.nk2) builds a list of names based on actual user activity. This AutoComplete functionality within Microsoft Outlook will then suggest previously used names and email addresses when sending mail based on the first couple of characters you enter in the To: field. Unfortunately, Microsoft Outlook does not provide a native method to edit the nk2 file. So once a recipient’s X.500 address is saved in the *.nk2 file there is not an easy way to remove it.

Cross-forest mailbox migrations typically present numerous obstacles with the repliability of existing messages in the user mailbox. During a cross-forest migration, a new mailbox for each user is created in the target forest. This means that a new and unique legacyExchangeDN value is stamped on the user account in the target forest when the Exchange mailbox is first created. The new legacyExchangeDN value in the target forest will not match the existing legacyExchangeDN value for the user in the source forest. This means that replying to old emails in a user’s inbox will produce an NDR, as the message cannot be routed correctly. The old legacyExchangeDN value does not exist in the target forest, so Exchange cannot route the message internally. This is like trying to resolve a DNS name when the zone does not contain the appropriate A record.

During cross-forest migrations there are several methods that can be used to help Exchange in the target forest understand the legacyExchangeDN values from the source forest.
  1. Use the CSVDE command to dump the alias and legacyExchangeDN values from the source forest and import them into the target forest.
  2. Utilize the manual prepare-mailbox scripts from Microsoft so the native move method can be used.
  3. Utilize third-party tools to stamp the legacyExchangeDN value from the source forest on the user located in the target forest.
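Whichever method you use, the common fix boils down to adding the source forest's legacyExchangeDN value as an X500 proxy address on the target mailbox, so Exchange can still resolve replies to old messages. A sketch (the DN shown is a placeholder; use the actual legacyExchangeDN value dumped from the source forest):

```powershell
# Add the old legacyExchangeDN as an X500 address on the target mailbox
$oldDN = "/o=NA/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=adorsey"
Set-Mailbox -Identity adorsey -EmailAddresses @{Add = "X500:$oldDN"}
```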

Next, I am going to examine the different ways to verify what the legacyExchangeDN Active Directory attributes for all our mailboxes actually are. 

The good news is that there are a lot of different methods and tools to accomplish this!
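One such method, sketched here with the AD module, is to dump every account that has a legacyExchangeDN value set:

```powershell
Import-Module ActiveDirectory

# List every mail-enabled user and its legacyExchangeDN value
Get-ADUser -LDAPFilter "(legacyExchangeDN=*)" -Properties legacyExchangeDN |
    Select-Object SamAccountName, legacyExchangeDN
```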