Site Mailboxes in Exchange 2013 & SharePoint 2013 – Part 1

Site Mailboxes in SharePoint & Exchange 2013

In this multi-part post, I will be exploring and configuring a new and very promising feature that ships with the Wave 15 release of the 2013 suite of collaboration products: Site Mailboxes. Site Mailboxes bring together email and SharePoint document libraries/team sites to enhance collaboration on documents and communications in the enterprise. Traditionally, users in the enterprise collaborate via two mediums: documents and email. While both SharePoint and Exchange/Outlook are great products on their own, this approach requires the user to juggle multiple clients such as the browser, Outlook, and OWA, or a combination of the two. Site Mailboxes attempt to change how users collaborate by surfacing these two back-end systems in the same client interface, Outlook. For more information on how Site Mailboxes are used in the new Office, please see an excellent overview here.

In this post we’ll explore what it takes to bring this new feature to users and the requirements associated with both Exchange and SharePoint 2013. To start, let’s take a look at the requirements on the SharePoint side of things. I will be following the official documentation on TechNet, which can be found here. We will step through each of the requirements in more detail as we progress through the series of posts.

SharePoint 2013 Requirements Overview:

  1. You must be a member of the SharePoint administrators group.
  2. Site Mailboxes require Exchange 2013.
  3. The EWS API installed on the SharePoint WFE must be version 15.0.516.25 or above. More information on how to confirm this is located here.
  4. User Profile Synchronization must be configured in the SharePoint 2013 farm.
  5. The App Management Service Application must be configured in the SharePoint 2013 farm.
  6. SSL must be configured for the Default Zone in web applications that are set up in a server-to-server authentication scenario.
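Requirements 3 and 5 can be checked and satisfied from PowerShell as well as the GUI. Below is a minimal sketch; the DLL path, service application name, database name, and application pool are assumptions that will vary by environment.

```powershell
# Check the EWS Managed API file version on each WFE.
# Hypothetical path -- adjust to wherever EwsManagedApi.msi installed the DLL.
$dll = Get-Item 'C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll'
$dll.VersionInfo.FileVersion   # should report 15.0.516.25 or above

# Provision the App Management Service Application (names are examples).
Add-PSSnapin Microsoft.SharePoint.PowerShell
$pool = Get-SPServiceApplicationPool 'SharePoint Web Services Default'
$app  = New-SPAppManagementServiceApplication -Name 'App Management Service' `
          -ApplicationPool $pool -DatabaseName 'AppManagementDB'
New-SPAppManagementServiceApplicationProxy -ServiceApplication $app
```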

Site Mailboxes require Exchange 2013

This should be a very obvious prerequisite and does not require any more detail. Exchange 2013 is required for this integration to work.

Install EWS API on SharePoint 2013 WFE server(s)

The first step in configuring integration between SharePoint 2013 and Exchange 2013 is to install the Exchange Web Services (EWS) Managed API on the SharePoint 2013 web front end server(s). To do this, navigate to the following url and download EwsManagedApi.msi.

Once downloaded, open a command prompt with elevated privileges, navigate to the directory where the EwsManagedApi.msi file was saved, and execute the following command. In my example, it is saved to C:\installs

msiexec /i EwsManagedApi.msi ADDLOCAL="ExchangeWebServicesApi_Feature,ExchangeWebServicesApi_Gac"


The installer will launch and present the below welcome screen, click “Next” to proceed.


Select the “I accept the terms in the License Agreement” radio button and click “Next” to continue.


Modify the installation folder if necessary and click “Next” to proceed.


Finally, click “Next” to confirm the installation and to begin the install process.


After the installation is complete (should be very quick), click “Close” to complete the installation process.


In Part 2, we will take a look at how to configure the User Profile Synchronization service in SharePoint 2013 to synchronize with Active Directory Domain Services as its source directory.

Grant a user local logon rights via Group Policy

To grant a user local logon rights for various services, first launch the Group Policy Management Editor and right-click to edit the Default Domain Controllers Policy as shown below.


Expand the Computer Configuration tree as shown below to expose the “Allow log on locally” policy.


Double-click “Allow log on locally” to open the properties and add the user in scope. Image shown below.


Click Apply and OK, then close all windows to apply the policy changes. Replication may take some time; however, the user and/or group added should be granted the right to log on locally once replication is complete.
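Rather than waiting for the next background refresh, the policy change can be pulled down immediately from the command line; a quick sketch (the report path is just an example):

```powershell
# Force an immediate Group Policy refresh on the target machine
gpupdate /force

# Optional: generate an HTML report to confirm the rights assignment applied
gpresult /h C:\Temp\gpreport.html
```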

How to migrate a vSphere 5.0 virtual machine to Hyper-V 2012 with System Center 2012 SP1

Now that Hyper-V 3.0 / Windows Server 2012 has been released and System Center 2012 SP1 is right around the corner, I decided to put myself in the position of many organizations out there today that have an extensive VMware environment but are looking to move some of their virtual servers over to the Hyper-V platform for any number of reasons. How do I accomplish this task? Let’s run through it.

My configuration:

1. VMware vCenter Server 5.0

2. VMware vSphere ESXi 5.0 hosts

3. Server 2012 Hyper-v 3.0 cluster

4. System Center Virtual Machine Manager 2012 SP1 (beta)

So first and foremost, SCVMM 2012 SP1 (beta) is required to support my Server 2012 Hyper-V hosts. Outside of that requirement, the same steps should still apply to non-SP1 environments that do not include the following products:

1. Windows Server 2012

2. vCenter 5.X / ESXi 5.X

At this point I have installed and configured my Hyper-V 3.0 cluster, SCVMM 2012 SP1 Beta, vCenter 5.0, and ESXi 5.0 hosts. I also have a virtual server running on ESXi 5.0 that I want to move over to my newly formed Hyper-V / System Center environment. The virtual server is shown below running in the VMware environment and is labeled with “scom-1” at the end of its server name.

For the sake of understanding what has already been completed and what is required to perform this operation, we’ll jump over to the SCVMM platform and lay out what was configured prior to converting the virtual server from VMware to Hyper-V.

1. I have already added my vCenter 5.0 server into the SCVMM 2012 Sp1 Beta environment.

2. After adding my vCenter 5.0 server, I’m able to add the VMware hosts and clusters managed by that vCenter instance into SCVMM 2012. After adding the hosts, I can see the hosts and VMs in the views shown below.

At this point my SCVMM 2012 SP1 Beta machine is able to see the virtual server running on ESXi via the vCenter APIs which will allow me to “convert the virtual machine”.  How do we do this?

1. The virtual server running in VMware must first have its “VMware Tools” removed from the running virtual guest operating system. In the “Programs and Features” section it will be displayed as shown below.

Once VMware Tools has been removed and the virtual server has been restarted, shut down the virtual machine so it resides in a “powered off” state in VMware vCenter. The image below shows the “scom-1” virtual server powered off.

In SCVMM 2012 we will select “Create Virtual Machine” and then “Convert Virtual Machine” from the list of options.

The convert virtual machine wizard will open and prompt for the source virtual machine. Clicking browse will allow for me to select the scom-1 virtual server that is running on the esxi host machine.

Proceeding through the wizard I am then able to give the machine a name and description.

Clicking next will bring me to the VM configuration page where I have the ability to modify the number of cpus and amount of memory that is currently configured for the virtual server.

The wizard will then bring me to the host selection page and allow for me to pick a destination host in my hyper-v cluster.

The next page in the wizard allows me to pick the destination path where I want the virtual machine files to reside. I’ve selected a cluster shared volume named “Volume 1”.

I am then able to set my virtual network that I want this machine to use on the destination host.

Lastly I am able to add some additional settings for the guest virtual server to determine automatic actions in the event of a host failure.

The summary page is the final section of the convert virtual machine wizard and once all settings have been reviewed, clicking “Create” will start the process.

During conversion of the virtual server, the process can be monitored via the jobs pane in the SCVMM 2012 as shown below.

Once completed, the migrated virtual machine will be placed on the destination host and running on Hyper-V as shown in the below screen shot.
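The same V2V conversion can also be scripted with the VMM PowerShell cmdlets rather than the wizard; below is a rough sketch, where the VM name, host name, and CSV path are placeholders for this lab environment.

```powershell
# Look up the powered-off source VM (as seen through vCenter) and the
# destination Hyper-V host ('scom-1' and 'hv-node1' are example names)
$vm     = Get-SCVirtualMachine -Name 'scom-1'
$vmHost = Get-SCVMHost -ComputerName 'hv-node1'

# Convert the VMware VM to a Hyper-V VM on the chosen host and CSV path
New-SCV2V -VM $vm -VMHost $vmHost -Path 'C:\ClusterStorage\Volume1' -Name 'scom-1'
```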

Exchange 2013 Tech Preview – Public Folders

In the past two versions of Exchange, public folders were rumored to be excluded from the release bits, but this time around it not only seems as though they are here to stay, but changes have been made to their structure along with enhancements to high availability. In both Exchange 2007 and Exchange 2010, public folders were available for use, but were administered via a separate management tool in the toolbox and did not participate in the new clustering technologies, including the high availability solutions in either release. At first glance, it looks like public folders are now stored inside the same mailbox database that stores user data, in what Microsoft is calling “specially designed mailboxes to store both the hierarchy and the public folder content”. This limited information can be found here in the sharing and collaboration section.

Onto setup and configuration of this specially designed mailbox for public folders in Exchange 2013. In the EAC, you’ll notice on the left hand side that public folder administration is now integrated into the admin console (EAC).

It’s also worth noting that by default, when logged into OWA as an end user, there is no public folder section. No public folder mailbox has been created at this point, so we’ll check back after we create the public folder mailbox and add some content.

Selecting public folders in the EAC for the first time produces the below error.

After clicking ok to the error and selecting public folder mailboxes in the admin center, Exchange tells us that every public folder must reside in a public folder mailbox and before we can create a new public folder, we’ll need to provision a new public folder mailbox.

Clicking the plus in the admin center presents us with a dialog box asking for a name, organizational unit and mailbox database.

I have chosen to name the public folder mailbox pf_mbx and have placed it in the public folder mailboxes organizational unit that was created in Active Directory Domain Services and in the DB1 database that has been created in Exchange 2013.

After clicking “Save”, which is a change from “OK” or “Apply”, the public folder mailbox is created and shown in the list.

Now, selecting “Public Folders” in the EAC and clicking the plus, we can create a new public folder. For the sake of testing I will name this public folder Test1 and leave the default top-level path of \. After selecting save, the new public folder is created in Exchange 2013 with the details provided below.
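The same objects can be created from the Exchange Management Shell; a quick sketch using the names from this post:

```powershell
# Create the public folder mailbox that will hold the hierarchy and content
New-Mailbox -PublicFolder -Name 'pf_mbx' -Database 'DB1' `
    -OrganizationalUnit 'public folder mailboxes'

# Create a public folder at the root of the hierarchy
New-PublicFolder -Name 'Test1' -Path '\'
```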

But can we see this public folder in OWA now? A refresh in OWA does not show any public folder access, even though the default OWA policy enables public folder access, so what gives? Signing out and back in to OWA also shows nothing new, so let’s mail-enable the new public folder and try to email a message to it. To do this, we head back to public folders in the EAC and select the Test1 folder. On the right-hand side of the EAC, in the details section, we can select “Enable” under the mail settings section, which defaults to disabled.

A warning appears asking to confirm that we want to mail-enable this public folder. Select “Yes” to proceed.

After the public folder has been mail-enabled, a test message was drafted in OWA and the Test1 address was pulled from the GAL and validated in OWA as shown below.
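Mail-enabling can also be done from the shell; a short sketch against the Test1 folder used in this post:

```powershell
# Mail-enable the public folder so it can receive email
Enable-MailPublicFolder -Identity '\Test1'

# Confirm it now has an SMTP address in the GAL
Get-MailPublicFolder -Identity '\Test1' | Format-List Name, PrimarySmtpAddress
```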

Successful delivery to the public folder takes place, so we know that the public folder mailbox and public folder creation process worked as designed, but how do we view the public folder and its content via OWA? OWA options show no enable or disable check box for public folder viewing.

For now it looks as though there is no public folder access via the OWA client. I’ll revisit this when I stand up a Windows 8 client OS with the 2013 Outlook Tech Preview. Until then…

Exchange 2013 Tech Preview – New Administration Console

Say goodbye to the EMC and get familiar with the Exchange Admin Center (EAC).

After Exchange 2013 has been successfully installed you might try to access the Exchange Management Console from Exchange 2007/2010 like I did, but you will not find it. Instead, you will have to access the new Exchange Admin Center that seems to have been built on the logic behind the ECP in Exchange 2010.

To access the EAC (Exchange Admin Center), launch IE and navigate to https://localhost/ecp. Screen shot below.

Upon successful login to the EAC, you are presented with the new administrative interface, which is very different from previous versions of Exchange.

The EAC defaults to the recipients view after login and provides access to the various recipient types. Navigating through the admin center follows the same flow as the ECP did: start on the left to select the function or area of Exchange to manage, then use the sub-menu for the selected area, and finally invoke the action or task against the selected object. The below image shows this with some very ugly red boxes.

Another thing that is very noticeable is the promotion of Office 365 and hybrid deployments in the new admin center. Both the very top navigation bar and the bottom of the navigation pane on the left show built-in integration for cloud-based and/or hybrid deployments.

While selecting the “Hybrid” option in the left pane brings me to a setup page where I can enable and configure my organization across both on-prem and O365, selecting “OFFICE 365” from the top navigation bar brings me to a help link on TechNet which has not yet been populated with data.

Lastly, one item that jumps out is the built-in integration for public folder management in the admin center, rather than a separate console in the Exchange Toolbox as was the case in Exchange 2007/2010 environments.

Right away we notice the two items that we can perform administrative tasks on:

  1. Public Folders
  2. Public Folder Mailboxes

Microsoft has changed the way public folders work in 2013 to store content and hierarchy in something called a public folder mailbox. I’m not quite sure what that means yet, but it does allow public folders to leverage the built-in HA and resiliency that mailbox databases benefit from. It’s almost as if the public folders are now stored in a mailbox designed for folders and content, and not stored in a separate database but contained in the same database as user mailboxes. I will be digging into this in more detail in another post, but for now it doesn’t look like public folders are going away in this release of Exchange.

If you’re wondering, yes the Exchange Toolbox does still exist and contains the tools shown in the below screenshot. Where’s the ExBPA?

In a series of articles I will be digging into each area of Exchange 2013 to see how it differs from its predecessor. Oh, and one more thing: right-click doesn’t seem to work anywhere in the EAC…

What are your thoughts?

Exchange Server 2013 Preview Released

Microsoft has just released the next wave of collaboration tools as preview versions of the software. Below are a few links for more information:

  • Exchange 2013 Preview Information
  • Lync 2013 Preview Information
  • SharePoint 2013 Preview Information
  • Office 2013 Preview Information

Part 1. Server 8 Beta Hyper-V Cluster Build

Below are the steps taken to build a Windows Server 8 Beta (build 8250) failover cluster for hyper-v virtualization.  My servers have already had the following performed:

  • Joined to domain
  • NIC Team configured with default settings
  • Hyper-v role installed
  • Failover Clustering feature has been installed on each of the two nodes
  • Virtual Switch created and setup to tag management traffic with the proper vlanID (only 2 nics available in lab)
  • Storage has been provisioned and zoned to each of the two clustered nodes.

Upon launching the Failover Cluster Manager tool, the familiar FCM console appears. To create a new cluster, select “Create Cluster…” in the actions pane.

The yet again familiar “Create Cluster Wizard” appears allowing us to build our new cluster. Select Next to continue.

In the “Select Servers” screen of the wizard, add the two (or more) nodes that will make up the cluster and select “Next” to proceed.

It’s always a good idea to ensure the cluster can be supported by Microsoft and validation of the cluster is one way to ensure everything required is in place. Leave the default of Yes selected and click “Next” to proceed to invoke the validation wizard.

It’s worth noting that during my validation tests the “List All Running Processes” task took almost 10 minutes. Have patience, and once it has completed, select “Next” to continue.

After the validation wizard has completed successfully, we are taken to the “Access Point for Administering the Cluster” screen. This seems to be new to Server 8, but the end result is the same as in 2008/R2 clustering in that we are creating a cluster core object for managing the cluster. My settings are configured as below. An additional static IP is best for this cluster object.

Selecting “Next” will take the wizard to the final confirmation screen summarizing the cluster name and IP address and also the nodes that will be part of the new cluster. Note that I am also unchecking the box at the bottom which states to “Add all eligible storage to the cluster”. I have not come across this before and will test the purpose of this in a different environment with different storage.

After successful completion of the new cluster, the system will automatically connect to the new “Access point for Administration” or cluster object. It should display similar to the below screen shot.
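The steps above can also be scripted with the Failover Clustering cmdlets; a sketch below, where the node names, cluster name, and IP address are examples for a lab:

```powershell
# Validate the prospective nodes first (the scripted equivalent of the
# validation wizard step above)
Test-Cluster -Node 'node1','node2'

# Create the cluster with a static access-point IP, without automatically
# adding all eligible storage (matching the unchecked box in the wizard)
New-Cluster -Name 'HVCLUSTER1' -Node 'node1','node2' `
    -StaticAddress '192.168.1.50' -NoStorage
```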

In Part 2 of Server 8 Hyper-V Clustering, I will bring shared storage (CSV) into the newly created cluster for storage of highly available virtual machines.

How to build a Windows Server 8 NIC Team

With the introduction of native NIC teaming in Windows Server 8, Microsoft has removed vendor requirements for creating NIC teams. The below steps will show you how to create a NIC team using the Windows Server 8 GUI.

1. Select “Local Server” from the “Server Manager Dashboard”

2. From the properties of the server, locate the network adapter teaming feature and select disabled to enable the feature.

3. After enabling the feature, the “Nic Teaming” window will open. Shift + Click each adapter that is to be added as a member of the team in the “Adapters and Interfaces” pane and right-click to select “Add to New Team”.

4. In the New Team window, specify a name for the team and select any advanced properties that may be relevant to the specific configuration in place. An excerpt from Microsoft’s “WS8 Beta Networking White Paper” explains the advanced properties below the New Team image.

NIC Teaming uses two sets of configuration algorithms:

Switch-independent modes. These algorithms make it possible for team members to connect to different switches because the switch doesn’t know that the interface is part of a team at the host. These modes do not require the switch to participate in the teaming.

Switch-dependent modes. These algorithms require the switch to participate in the teaming. Here, all interfaces of the team are connected to the same switch.

There are two common choices for switch-dependent modes of NIC Teaming:

Generic or static teaming (IEEE 802.3ad draft v1). This mode requires configuration on both the switch and host to identify which links form the team. Because this is a statically configured solution, there is no additional protocol to assist the switch and host to identify incorrectly plugged cables or other errors that could cause the team to fail to perform. Typically, this mode is supported by server-class switches.

Dynamic teaming (IEEE 802.1ax, Link Aggregation Control Protocol [LACP]). This mode is also commonly referred to as IEEE 802.3ad because it was developed in the IEEE 802.3ad committee before being published as IEEE 802.1ax. It works by using the LACP to dynamically identify links that are connected between the host and a specific switch. Typical server-class switches support IEEE 802.1ax, but most require administration to enable LACP on the port. There are security challenges to allowing completely dynamic IEEE 802.1ax to operate on a switch. As a result, switches today still require the switch administrator to configure the switch ports that are allowed to be members of such a team. Either of these switch-dependent modes results in inbound and outbound traffic that approach the practical limits of the aggregated bandwidth. This is because the team’s pool of links is seen as a single pipe.

NIC Teaming is compatible with all networking capabilities in Windows Server “8” Beta, except for the following three:
• Single-root I/O virtualization (SR-IOV).
• Remote Direct Memory Access (RDMA).
• TCP Chimney Offload.
For SR-IOV and RDMA, data is delivered directly to the network adapter without passing through the networking stack. Therefore, the network adapter team can’t see or redirect the data to another path in the team. In this release, TCP Chimney Offload is not supported with NIC Teaming.

The NIC Teaming feature requires the following:
• Windows Server “8” Beta.
• At least one network adapter.
o If there are two or more network adapters, they should be of the same speed.
o Two or more network adapters are required if you are seeking bandwidth aggregation or failover protection.
o One or more network adapters are sufficient if you are only seeking VLAN segregation for the network stack.

NIC Teaming allows you to perform three important tasks:
• You can use it to aggregate bandwidth.
• You can use it to prevent connectivity loss in case of a network component failure.
• You can use it to do VLAN segregation of traffic from a host.
It works with both physical servers and virtual machines. You now have the option to use this feature and receive full support for your configuration from Microsoft, regardless of your network adapter vendor.

Windows 8 NIC Teaming is managed with Windows PowerShell and through the NIC Teaming configuration UI. Both the PowerShell and the UI work for both physical and virtual servers. The UI can manage physical and virtual servers at the same time.

5. By clicking the “Team 1 – Default” link we can specify the default interface for the new nic team.

6. For my configuration, I will be leaving the primary nic team defaults in place. For those interested in the additional configuration, the below screenshot displays the available settings.

7. After clicking OK on all windows related to creating the new NIC team, you will see a series of NIC states while the server is building the team. Below are the three statuses you may see while the team is being created.
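As the white paper excerpt above notes, the same team can also be created with PowerShell via the built-in LBFO cmdlets; a sketch below, where the team name and adapter names are examples:

```powershell
# Create a switch-independent team from two physical adapters
# ('Ethernet 1'/'Ethernet 2' and the team name are example values)
New-NetLbfoTeam -Name 'Team 1' -TeamMembers 'Ethernet 1','Ethernet 2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Confirm the team and its members came up
Get-NetLbfoTeam -Name 'Team 1'
Get-NetLbfoTeamMember -Team 'Team 1'
```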

BlockReplication Event Log Warning ID 245 – Should I be concerned?

I noticed a fair amount of alerts in one of the Exchange 2010 SP1 environments I have access to related to block mode replication warnings. The event source is HighAvailability and the event ID is 245. The event type is “Warning” and the events are located inside the crimson channel on an Exchange 2010 SP1 mailbox server that is a member of a DAG. Below is the location of the event log.

This event should not show up on a pre-SP1 mailbox server, as “continuous replication – block mode” was not introduced until SP1 for Exchange Server 2010. For more information on block mode replication and what it does, please go here.

First, I wanted to verify whether or not the Exchange 2010 SP1 DAG is operating in block mode. I was merely curious as to the current state of replication, as Exchange will switch between the two continuous replication modes on its own.

For information on how to verify this, see here.

What is the error?

Source: HighAvailability    EventID: 245   Type:Warning
General Information Tab:
Block-mode replication for database copy ‘DatabaseServerName’ released a range of complete but unfinalized logs from generation 0x13b30b to 0x13b30b. The data is identical to the active copy but the file timestamps may differ by a small value

Thinking about how block mode continuous replication is invoked, we know that the passive copy of the Exchange database has caught up and is relying only on the Exx.chk file at this point to determine what has been committed to the in-memory copy of the EDB file, what has been written to the database on disk, and what the active copy DAG member needs to provide from a replication perspective. This warning seems to be due to either a *over (failover/switchover) event or something else that would cause the system to close out what is in memory, write it to a .log file, and timestamp it, but with a timestamp that differs from the source. My thought is that this warning is written to the log when Exchange determines it needs to switch from block mode back to file mode for whatever reason.

It is the log copier’s job to determine when to switch between continuous replication file mode (CRFM) and continuous replication block mode (CRBM). If the log copier were to log somewhere each time it switches between the two modes, I could match up the event timestamps to see if my theory holds true.

Diagnostic logging set to High on the MSExchangeRepl service did not show anything informative during the same time frame as the 245 warnings, other than database redundancy health check scheduled tasks and log truncation due to backups, as expected. This is only a hunch that I will try to validate, but in my experience so far with Exchange 2010 SP1 and DAGs, this warning can be safely ignored. If anyone reads this and can shed some light, please let me know. I will continue to dig as well!

How to verify if an Exchange 2010 DAG is running in continuous replication block mode (CRBM)

To verify whether or not an Exchange 2010 SP1 DAG is operating in block mode, run the cmdlet shown below.

Get-Counter -ComputerName nameofdagmember -Counter "\MSExchange Replication(*)\Continuous replication - block mode Active"

This queries a performance counter available on 2010 SP1 mailbox servers and returns a 1 if block mode is active for the particular database instance. The (*) in the cmdlet runs the counter against all known instances. If successful, the shell will return something similar to the below screenshot. I have highlighted where to look for the 1 to determine whether block mode is in fact in effect.
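To sweep every member of a DAG at once rather than naming servers individually, the counter query can be wrapped in a small loop; a sketch below, assuming a DAG named DAG1:

```powershell
# Query the block-mode counter on every server in the DAG
# ('DAG1' is an example name)
$dag = Get-DatabaseAvailabilityGroup -Identity 'DAG1'
foreach ($server in $dag.Servers) {
    Get-Counter -ComputerName $server.Name `
        -Counter '\MSExchange Replication(*)\Continuous replication - block mode Active'
}
```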