Get-ADObject Counts

A quick, helpful PowerShell script to grab some AD DS object counts and dump them to an .xlsx file. Might come in handy down the road. Requires Excel installed on the workstation and WinRM enabled on the specified domain controller in the same AD DS domain.

# Prompt end user for domain controller to use for AD DS queries (netbios name will suffice as long as on same domain)
$domaincontroller = Read-Host "Please specify the domain controller to use"

# Create directory off of C: to store resulting xlsx file
New-Item C:\temp\ADDS_Export\ -ItemType Directory -Force | Out-Null

# Query AD DS for object counts on the domain controller specified.
# Invoke-Command is used rather than Enter-PSSession because Enter-PSSession
# is interactive only; variables set inside it would not be available to the
# rest of the script, and the remote session closes when the script block ends.
$counts = Invoke-Command -ComputerName $domaincontroller -ScriptBlock {
    [PSCustomObject]@{
        Users     = (Get-ADUser -Filter * -ResultSetSize $null | Measure-Object).Count
        Computers = (Get-ADComputer -Filter 'OperatingSystem -notlike "*Server*"' -ResultSetSize $null | Measure-Object).Count
        Servers   = (Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' -ResultSetSize $null | Measure-Object).Count
        Groups    = (Get-ADGroup -Filter * -ResultSetSize $null | Measure-Object).Count
    }
}

# Declare local variables from the remote results
$aduser     = $counts.Users
$adcomputer = $counts.Computers
$adserver   = $counts.Servers
$adgroup    = $counts.Groups

# Create new Excel object and populate the data (requires Excel to be installed on the workstation)
$objExcel = New-Object -ComObject Excel.Application
$objExcel.Visible = $false
$objExcel.SheetsInNewWorkbook = 1
$Workbook01 = $objExcel.Workbooks.Add()
$Worksheet01 = $Workbook01.Sheets.Item(1)

# Header row
$Worksheet01.Cells.Item(1,1) = "User Count"
$Worksheet01.Cells.Item(1,2) = "Computer Count"
$Worksheet01.Cells.Item(1,3) = "Group Count"
$Worksheet01.Cells.Item(1,4) = "Server Count"

# Data row
$Worksheet01.Cells.Item(2,1) = $aduser
$Worksheet01.Cells.Item(2,2) = $adcomputer
$Worksheet01.Cells.Item(2,3) = $adgroup
$Worksheet01.Cells.Item(2,4) = $adserver

# Save the workbook to the export directory (any file name will do) and close Excel
$Workbook01.SaveAs("C:\temp\ADDS_Export\ADDS_ObjectCounts.xlsx")
$objExcel.Quit()
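If Excel is not available on the workstation, the same counts could instead be written out with Export-Csv, which has no COM dependency. A minimal sketch, assuming the count variables above are already populated (the file name is just an example):

```powershell
# Build a single object holding the counts gathered above and export it to CSV.
# Assumes $aduser, $adcomputer, $adgroup and $adserver already hold the counts.
[PSCustomObject]@{
    "User Count"     = $aduser
    "Computer Count" = $adcomputer
    "Group Count"    = $adgroup
    "Server Count"   = $adserver
} | Export-Csv -Path C:\temp\ADDS_Export\ADDS_ObjectCounts.csv -NoTypeInformation
```

The resulting .csv can still be opened in Excel later if needed.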

Exchange Server 2013 Preview Released

Microsoft has just released the next wave of collaboration tools as preview versions. Below are a few links for more information:

  • Exchange 2013 Preview Information
  • Lync 2013 Preview Information
  • SharePoint 2013 Preview Information
  • Office 2013 Preview

Part 1. Server 8 Beta Hyper-V Cluster Build

Below are the steps taken to build a Windows Server 8 Beta (build 8250) failover cluster for Hyper-V virtualization. My servers have already had the following performed:

  • Joined to the domain
  • NIC team configured with default settings
  • Hyper-V role installed
  • Failover Clustering feature installed on each of the two nodes
  • Virtual switch created and set up to tag management traffic with the proper VLAN ID (only 2 NICs available in the lab)
  • Storage provisioned and zoned to each of the two clustered nodes

Upon launching the Failover Cluster Manager tool, the familiar FCM console appears. To create a new cluster, select “Create Cluster…” in the actions pane.

Once again, the familiar “Create Cluster Wizard” appears, allowing us to build our new cluster. Select “Next” to continue.

In the “Select Servers” screen of the wizard, add the two (or more) nodes that will make up the cluster and select “Next” to proceed.

It’s always a good idea to ensure the cluster can be supported by Microsoft, and validating the cluster is one way to confirm everything required is in place. Leave the default of “Yes” selected and click “Next” to invoke the validation wizard.
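The same validation the wizard runs can also be kicked off ahead of time from PowerShell with the FailoverClusters module. A sketch, with placeholder node names:

```powershell
# Validate the two lab nodes before building the cluster.
# NODE1 and NODE2 are placeholder names - substitute your own servers.
Import-Module FailoverClusters
Test-Cluster -Node NODE1, NODE2
```

Test-Cluster writes out the same .mht validation report the wizard produces.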

It’s worth noting that during my validation tests, the “List All Running Processes” task took almost 10 minutes. Have patience, and once it has completed, select “Next” to continue.

After the validation wizard has successfully completed, we are taken to the “Access Point for Administering the Cluster” screen. This appears to be new to Server 8, but the end result is the same as in 2008/R2 clustering: we are creating a cluster core object for managing the cluster. My settings are configured as below. A dedicated static IP address is best for this cluster object.

Selecting “Next” takes the wizard to the final confirmation screen, summarizing the cluster name, IP address, and the nodes that will be part of the new cluster. Note that I am also unchecking the box at the bottom labeled “Add all eligible storage to the cluster.” I have not come across this option before and will test its purpose in a different environment with different storage.

After successful creation of the new cluster, the console automatically connects to the new “Access Point for Administration,” or cluster object. It should display similar to the screenshot below.

In Part 2 of Server 8 Hyper-V Clustering, I will bring shared storage (CSV) into the newly created cluster for storage of highly available virtual machines.
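For reference, the wizard steps above collapse to a single PowerShell cmdlet, where -NoStorage mirrors unchecking the “Add all eligible storage to the cluster” box. A sketch with placeholder names and IP address:

```powershell
# Create the cluster without automatically claiming eligible storage.
# CLUSTER01, NODE1/NODE2 and the IP address are placeholder lab values.
Import-Module FailoverClusters
New-Cluster -Name CLUSTER01 -Node NODE1, NODE2 -StaticAddress 192.168.1.50 -NoStorage
```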

How to Build a Windows Server 8 NIC Team

With the introduction of native NIC teaming in Windows Server 8, Microsoft has removed vendor requirements for creating NIC teams. The steps below show how to create a NIC team using the Windows Server 8 GUI.

1. Select “Local Server” from the “Server Manager Dashboard”

2. From the properties of the server, locate the NIC Teaming property and click the “Disabled” link to enable the feature.

3. After enabling the feature, the “NIC Teaming” window opens. Shift+click each adapter that is to be added as a member of the team in the “Adapters and Interfaces” pane, then right-click and select “Add to New Team”.

4. In the “New Team” window, specify a name for the team and select any advanced properties relevant to your specific configuration. An excerpt from Microsoft’s “WS8 Beta Networking White Paper” explaining the advanced properties follows the New Team image.

NIC Teaming uses two sets of configuration algorithms:

Switch-independent modes. These algorithms make it possible for team members to connect to different switches because the switch doesn’t know that the interface is part of a team at the host. These modes do not require the switch to participate in the teaming.

Switch-dependent modes. These algorithms require the switch to participate in the teaming. Here, all interfaces of the team are connected to the same switch.

There are two common choices for switch-dependent modes of NIC Teaming:

Generic or static teaming (IEEE 802.3ad draft v1). This mode requires configuration on both the switch and host to identify which links form the team. Because this is a statically configured solution, there is no additional protocol to assist the switch and host to identify incorrectly plugged cables or other errors that could cause the team to fail to perform. Typically, this mode is supported by server-class switches.

Dynamic teaming (IEEE 802.1ax, Link Aggregation Control Protocol [LACP]). This mode is also commonly referred to as IEEE 802.3ad because it was developed in the IEEE 802.3ad committee before being published as IEEE 802.1ax. It works by using the LACP to dynamically identify links that are connected between the host and a specific switch. Typical server-class switches support IEEE 802.1ax, but most require administration to enable LACP on the port. There are security challenges to allowing completely dynamic IEEE 802.1ax to operate on a switch. As a result, switches today still require the switch administrator to configure the switch ports that are allowed to be members of such a team. Either of these switch-dependent modes results in inbound and outbound traffic that approach the practical limits of the aggregated bandwidth. This is because the team’s pool of links is seen as a single pipe.

NIC Teaming is compatible with all networking capabilities in Windows Server “8” Beta, except for the following three exceptions:
• Single-Root I/O Virtualization (SR-IOV).
• Remote Direct Memory Access (RDMA).
• TCP Chimney Offload.
For SR-IOV and RDMA, data is delivered directly to the network adapter without passing through the networking stack. Therefore, the network adapter team can’t see or redirect the data to another path in the team. In this release, TCP Chimney Offload is not supported with NIC Teaming.

The NIC Teaming feature requires the following:
• Windows Server “8” Beta.
• At least one network adapter.
  o If there are two or more network adapters, they should be of the same speed.
  o Two or more network adapters are required if you are seeking bandwidth aggregation or failover protection.
  o One or more network adapters suffice if you are only seeking VLAN segregation for the network stack.

NIC Teaming allows you to perform three important tasks:
• You can use it to aggregate bandwidth.
• You can use it to prevent connectivity loss in case of a network component failure.
• You can use it to do VLAN segregation of traffic from a host.

It works with both physical servers and virtual machines. You now have the option to use this feature and receive full support for your configuration from Microsoft, regardless of your network adapter vendor.

Windows 8 NIC Teaming is managed with Windows PowerShell and through the NIC Teaming configuration UI. Both the PowerShell and the UI work for both physical and virtual servers. The UI can manage physical and virtual servers at the same time.
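As a sketch of that PowerShell side, the NetLbfo cmdlets handle team management. Adapter and team names below are placeholders for a lab box:

```powershell
# Create a switch-independent team from two placeholder adapters,
# then list the team and its members to confirm the configuration.
New-NetLbfoTeam -Name "Team 1" -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode SwitchIndependent
Get-NetLbfoTeam
Get-NetLbfoTeamMember
```

The -TeamingMode parameter maps directly to the switch-independent and switch-dependent modes described in the white paper excerpt above.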

5. By clicking the “Team 1 – Default” link, we can specify the default interface for the new NIC team.

6. For my configuration, I will be leaving the primary NIC team defaults in place. For those interested in the additional configuration, the screenshot below displays the available settings.

7. After clicking “OK” in all windows related to creating the new NIC team, you will see a series of NIC states while the server builds the team. Below are the three statuses you may see while the team is being created.
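Those states can also be watched from PowerShell rather than the UI. A small sketch using the NetLbfo cmdlets, with a placeholder team name:

```powershell
# Check each team member's state while the team is coming up.
# "Team 1" is a placeholder team name - use the name you gave your team.
Get-NetLbfoTeamMember -Team "Team 1" |
    Select-Object Name, OperationalStatus, FailureReason
```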