Archive for June, 2014

Powershell script to create Hyper-V Virtual Machines in bulk


8/14/2014:

This script has been deprecated.  It has been re-written and made part of the SBTools PS Module available here.

Sample script output:

vm07

This script can be used to create Hyper-V virtual machines in bulk. This may be useful in testing, benchmarking, or lab environments.

CVM1

This script can be downloaded from the Microsoft TechNet Gallery.
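The full script handles logging, timing, and bulk VHD copies; as a minimal sketch of the underlying approach (not the SBTools/TechNet version), a loop around New-VHD and New-VM is all that is needed. The VM names, path, switch name, and sizes below are assumptions for illustration:

# Minimal sketch of bulk VM creation - not the SBTools/TechNet version of the script
# Assumes the Hyper-V Powershell module is available, and that the path and virtual switch below exist
$VMPath  = "D:\VMs"       # assumed folder for VM and VHDX files
$Switch  = "vSwitch1"     # assumed virtual switch name
$VMCount = 5
1..$VMCount | ForEach-Object {
    $VMName = "TestVM{0:D2}" -f $_
    $VHDX   = Join-Path $VMPath "$VMName.vhdx"
    New-VHD -Path $VHDX -SizeBytes 40GB -Dynamic | Out-Null
    New-VM -Name $VMName -MemoryStartupBytes 2GB -Path $VMPath -VHDPath $VHDX -SwitchName $Switch | Out-Null
    Write-Host "Created $VMName"
}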


Revisions:

1.0 – 06/30/2014 – Script leaves log file in the folder where it runs
1.1 – 07/07/2014 – Added CSV log file showing disk copy duration and throughput
1.2 – 07/13/2014 – Added HRBytes function, minor cosmetic tweaks

 


Benchmarking Gridstore enterprise storage array (2)


This is another post in a series on performance testing and benchmarking the Gridstore enterprise storage array.

Gridstore Array components:

6x H-Nodes. Each has 1x Xeon E5-2403 processor at 1.8 GHz with 4 cores (no hyper-threading) and 10 MB L3 cache, 32 GB DDR3 1333 MHz DIMM, 4x 3TB 7200 RPM SAS disks and a 550 GB PCIe Flash card.

GS-009k

Testing environment:

One compute node with 2x Xeon E5-2430L CPUs at 2 GHz with 6 cores each (12 Logical processors) and 15 MB L3 cache, 96 GB RAM

Pre-test network bandwidth verification:

Prior to testing array disk IO, I tested the availability of bandwidth on the Force 10 switch used. I used NTttcp Version 5.28 tool. One of the array nodes was the receiver:

GS-002

The HV-LAB-01 compute node was the sender:

GS-003

I configured the tool to use 4 processor cores only since the Gridstore storage nodes have only 4 cores.

The result was a usable bandwidth of 8.951 Gbps (1,118.9 MB/s). Testing was done using standard 1,500-byte MTU frames, not 9,000-byte jumbo frames.
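For reference, the NTttcp invocations were of the following form (4 threads, 2 MB receive buffers, 120-second run), with 10.5.19.30 being the receiving storage node; the full command output is included in the first post of this series:

On the receiving Gridstore storage node: ntttcp.exe -r -m 4,*,10.5.19.30 -rb 2M -a 16 -t 120
On the sending compute node: ntttcp.exe -s -m 4,*,10.5.19.30 -rb 2M -a 16 -t 120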


vLUNs:

8 vLUNs were configured for this test.

GS-009l

Each vLUN is configured as follows:

  • Protect Level: 1 (striped across 3 Gridstore nodes, fault tolerant to survive single node failure)
  • Optimized for: IOPS
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 64 KB
  • Size: 5 TB (3 segments, 2.5 TB each)

This configuration utilizes all 24 disks in this grid (6 nodes * 4 disks each = 24 disks = 8 vLUNs * 3 disks each), which provides optimum array throughput.


Testing tool:

Intel’s IOMeter version 2006.07.27

24 workers, each configured to target all 8 vLUNs – 32 outstanding I/Os

GS-009m

IO profile: 50% read/50% write, 10% random, 8k alignment:

GS-009n

Test duration: 10 minutes


Test result:

IOMeter showed upwards of 17.7k IOPS:

GS-009a

Disk performance details on the compute node:

GS-009b

CPU performance details on the compute node:

GS-009c

Network performance on one of the storage nodes:

GS-009d

Disk performance on one of the storage nodes:

GS-009e

CPU performance on one of the storage nodes:

GS-009f

Overall summary performance on one of the storage nodes:

GS-009g

CPU utilization on the storage nodes as shown from the GridControl snap-in:

GS-009h

Bytes received:

GS-009i

Final test result and full test details:

GS-009j


Conclusion and important points to note:

  • Network utilization maxed out the single 10 Gbps NIC used on both the compute and storage nodes. This suggests that the array is likely to deliver more IOPS if more network bandwidth is available. The next test will use 2 teamed NICs on the compute node, as well as 3 storage nodes with teamed 10 Gbps NICs.
  • CPU was maxed out on the storage nodes during the test (the storage nodes have only 4 cores). This suggests that CPU may be a bottleneck on the storage nodes. It also leads me to believe that a) more processing power is needed on the storage nodes, and b) RDMA NICs are likely to enhance performance greatly. The Mellanox ConnectX-3 VPI dual-port PCIe x8 card may be just what the doctor ordered. In a perfect environment, I would have that coupled with the Mellanox InfiniBand MSX6036F-1BRR 56 Gbps switch.
  • Disk IO performance on the storage nodes during the test showed about 240 MB/s of data transfer, or about 60 MB/s for each of the 4 disks in the node. This corresponds to the native IO performance of the SAS disks, suggesting a minimal/negligible boost from the 550 GB PCIe flash card in the storage node.

 

 


Benchmarking Gridstore enterprise storage array (1)


Gridstore provides an alternative to traditional enterprise storage. Basic facts about Gridstore storage technology include:

  • It provides storage nodes implemented as 1 RU servers that function collectively as a single storage array.
  • Connectivity between the nodes and the storage consumers/compute nodes occurs over one or more 1 or 10 Gbps Ethernet connections.
  • NIC teaming can be set up on the Gridstore nodes to provide additional bandwidth and fault tolerance
  • It utilizes a virtual controller to present storage to Windows servers

The IO testing tool and its settings are detailed in this post.

vLUNs can be easily created using the GridControl snap-in. This testing was done with a Gridstore array composed of 6 H-nodes (click node details to see more).

Prior to testing array disk IO, I tested the availability of bandwidth on the Force 10 switch used. I used NTttcp Version 5.28 tool. One of the array nodes was the receiver:

GS-002

 

The HV-LAB-01 compute node was the sender:

GS-003

I configured the tool to use 4 processor cores only since the Gridstore storage nodes had only 4 cores.

The result was a usable bandwidth of 8.951 Gbps (1,118.9 MB/s). Testing was done using standard 1,500-byte MTU frames, not 9,000-byte jumbo frames.

Test details:

On the receiver Gridstore storage node:
C:\Support>ntttcp.exe -r -m 4,*,10.5.19.30 -rb 2M -a 16 -t 120
Copyright Version 5.28
Network activity progressing…
Thread Time(s) Throughput(KB/s) Avg B / Compl
====== ======= ================ =============
0 120.011 311727.158 60023.949
1 120.011 233765.293 53126.468
2 120.011 306670.676 56087.990
3 120.011 293592.705 52626.788
##### Totals: #####
Bytes(MEG) realtime(s) Avg Frame Size Throughput(MB/s)
================ =========== ============== ================
134280.569568 120.011 1457.709 1118.902
Throughput(Buffers/s) Cycles/Byte Buffers
===================== =========== =============
17902.435 3.864 2148489.113
DPCs(count/s) Pkts(num/DPC) Intr(count/s) Pkts(num/intr)
============= ============= =============== ==============
17388.114 46.288 26563.098 30.300
Packets Sent Packets Received Retransmits Errors Avg. CPU %
============ ================ =========== ====== ==========
4634562 96592255 599 0 62.960

On the sender compute node: HV-LAB-05
C:\Support>ntttcp.exe -s -m 4,*,10.5.19.30 -rb 2M -a 16 -t 120
Copyright Version 5.28
Network activity progressing…
Thread Time(s) Throughput(KB/s) Avg B / Compl
====== ======= ================ =============
0 120.003 311702.607 65536.000
1 120.003 233765.889 65536.000
2 120.003 306669.667 65536.000
3 120.003 293592.660 65536.000
##### Totals: #####
Bytes(MEG) realtime(s) Avg Frame Size Throughput(MB/s)
================ =========== ============== ================
134268.687500 120.004 1457.441 1118.868
Throughput(Buffers/s) Cycles/Byte Buffers
===================== =========== =============
17901.895 2.957 2148299.000
DPCs(count/s) Pkts(num/DPC) Intr(count/s) Pkts(num/intr)
============= ============= =============== ==============
25915.561 1.504 71032.291 0.549
Packets Sent Packets Received Retransmits Errors Avg. CPU %
============ ================ =========== ====== ==========
96601489 4677580 22698 1 7.228


Test 1:

Compute node(s): 1 physical machine with 2x Xeon E5-2430L CPUs at 2 GHz with 6 cores each (12 Logical processors) and 30 MB L3 cache, 96 GB RAM, 2x 10 Gbps NICs

GS-001

vLUN:

  • Protect Level: 0 (no fault tolerance, striped across 4 Gridstore nodes)
  • Optimized for: N/A
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (4 segments, 512 GB each)

GS-A05

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 10.43k IOPS

GS-A01

 

In the above image you can see the read/write activity to the 4 nodes that make up this vLUN listed under Network Activity in the Resource Monitor/Network tab.

GS-A02

At the same time, the 4 nodes that make up this vLUN showed average CPU utilization around 40%. This dropped down to 0% right after the test.

GS-A03

The 4 nodes’ memory utilization averaged around 25% during the test. Its baseline is 20%.

GS-A04

 


Test 2: The same single compute node above

vLUN:

  • Protect Level: 1 (striped across 3 Gridstore nodes, fault tolerant to survive single node failure)
  • Optimized for: IOPS
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (3 segments, 1 TB each)

GS-B04

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 11.32k IOPS

GS-B01

 

GS-B02

 

GS-B03

 


Test 3: The same single compute node above

vLUN:

  • Protect Level: 1 (striped across 5 Gridstore nodes, fault tolerant to survive single node failure)
  • Optimized for: Throughput
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (5 segments, 512 GB each)

GS-C01

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 9.28k IOPS
GS-C02

GS-C03

 

GS-C04


Test 4: The same single compute node above

vLUN:

  • Protect Level: 2 (striped across 6 Gridstore nodes, fault tolerant to survive 2 simultaneous node failures)
  • Optimized for: Throughput
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (6 segments, 512 GB each)

GS-D01

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 4.56k IOPS

GS-D02

GS-D03

GS-D04

 


Test 5: The same single compute node above

2 vLUNs:

1. The same Grid Protection Level 1 vLUN from test 2 above, with Platinum QoS setting, plus

2. Identical 2nd vLUN except that QoS is set to Gold:

  • Protect Level: 1 (striped across 3 Gridstore nodes, fault tolerant to survive 1 node failure)
  • Optimized for: IOPS
  • QoS: Gold
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (3 segments,  1 TB each)

GS-E01

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 10.52k IOPS

GS-E02

GS-E03

GS-E04

 


 

Test 6: The same single compute node above

3 vLUNs:

All the same:

  • Protect Level: 1 (striped across 3 Gridstore nodes, fault tolerant to survive 1 node failure)
  • Optimized for: IOPS
  • QoS: Platinum
  • Unmasked: to 1 server
  • File system: NTFS
  • Block size: 32 KB
  • Size: 2 TB (3 segments,  1 TB each)

GS-F1

Result:

Testing with 24 vCores, 10 Gbps NIC, 1 compute node, 32k block size, 50% read/50% write IO profile => 9.94k IOPS

GS-F6

GS-F5

GS-F4

GS-F3

GS-F2


 Summary:

GS-004


Creating vLUNs on a Gridstore array


Gridstore provides an alternative to traditional enterprise storage. Basic facts about Gridstore storage technology include:

  • It provides storage nodes implemented as 1 RU servers that function collectively as a single storage array.
  • Connectivity between the nodes and the storage consumers/compute nodes occurs over one or more 1 or 10 Gbps Ethernet connections.
  • NIC teaming can be set up on the Gridstore nodes to provide additional bandwidth and fault tolerance
  • It utilizes a virtual controller to present storage to Windows servers

The following is an overview of available vLUN options and features. The lab used consists of 6x Gridstore “H” storage nodes. Gridstore storage nodes are of 2 types: H-nodes and C-nodes. C-nodes are capacity nodes and typically include 4x 3TB 7200 RPM SAS disks. H-nodes are hybrid nodes that include a 550 GB PCIe Flash card. Each node has:

  • CPU: 1x Xeon E5-2403 processor at 1.8 GHz with 4 cores (no hyper-threading) and 10 MB L3 cache
  • Memory: 32 GB DDR3 1333 MHz DIMM
  • Disks (not counting boot/system disk(s)): 4x 3TB 7200 RPM SAS disks and a 550 GB PCIe Flash card

To create a vLUN, in the GridControl snap-in, go to vPools => (vPool_Name), right-click on vLUNs, and click Create vLUN:
GS-v1
GridProtect level 0: This setting provides no protection against any disk loss in any storage node, or against any node loss in the grid. This option is strongly discouraged.
GS-v2

 

The next step is optional. It includes the selection of QoS (Bronze/Gold/Platinum), which compute node(s) to unmask this vLUN to, and how to format it.
GS-v3

If you skip this step, the GridStore software will create the vLUN but not unmask it to any host:
GS-v4

In this view note:

  1. vLUN protect level 0 is created on 4 storage nodes listed in the Hostname column (this is the node’s NetBIOS name)
  2. The “Disk” and “slot” columns show the actual disks on which this vLUN resides. The following view shows the same information under Storage Nodes:
    GS-v5
  3. vLUNs are thick-provisioned; the vLUN’s entire space is dedicated/reserved on the disks.

To unmask the newly created vLUN and present it to a compute node, right-click on it, and click Add vLUN to Server:

GS-v6

Pick the desired server from the drop down list and select the desired QoS level (Bronze/Gold/Platinum):

GS-v7

The unmasked vLUN becomes visible to the selected compute node:

GS-v8

Like any regular disk, we can now bring it online, initialize it (GPT recommended), and format it. I recommend using a 32 KB block size rather than the 4 KB default, and naming the volume the same as the vLUN for consistency. A 32 KB block size enhances IOPS at the expense of potentially wasting disk space if the average file size is under 32 KB, which is rarely a concern since most workloads have files larger than 32 KB.

GS-v9
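For example, assuming the new vLUN shows up as disk number 5 and we want to assign drive letter G and the label vLUN1 (all assumptions for illustration), the online/initialize/format steps can be scripted along these lines:

# Sketch only - the disk number, drive letter, and label below are assumptions for this example
$DiskNumber = 5
Set-Disk -Number $DiskNumber -IsOffline $false
Initialize-Disk -Number $DiskNumber -PartitionStyle GPT
New-Partition -DiskNumber $DiskNumber -UseMaximumSize -DriveLetter G |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "vLUN1" -AllocationUnitSize 32KB -Confirm:$false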

 


Using Azure Storage with Powershell – Getting started


To get started, install the Azure Powershell module using the Microsoft Web Platform Installer:

Az01

  • After installation, type in Azure in the search field, click the search icon, then click Install to see installed modules on top:

Az02

  • Close the Microsoft Web Platform Installer. To get back to it to add/remove modules you can use the icon:

Az04

  • Notice the new Azure set of icons added:

Az03

  • I’ve pinned the Azure Powershell icon to the task bar. To open it, right-click the icon, then right-click Windows Azure Powershell and click Run as administrator

Az05

  •  To connect to your Azure account, type in Add-AzureAccount

Az06

  • Azure Powershell displays a message similar to:

Az07

  • Check your subscription(s) using the command: Get-AzureSubscription

Az08

  • If you have more than one subscription under your Azure account, you may want to switch to a specific subscription. Use this command to switch to a given subscription and make it the default: Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN" -Default
    Substitute "Visual Studio Premium with MSDN" with the SubscriptionName shown by the Get-AzureSubscription command

Az09
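Put together, the account and subscription setup above amounts to three commands (substitute your own subscription name as noted):

Add-AzureAccount
Get-AzureSubscription
Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN" -Default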

If you don’t have a storage account set up under your Azure subscription, you can create one from your Azure portal. Click Storage on the left, then click New at the bottom:

AZ11

Type in a name for the new account you wish to create (it must be lowercase letters and numbers only). Pick an Azure data center, typically one that’s physically close to your location for better latency. Pick a subscription and a replication setting: locally-redundant gives you 3 copies of your data in the data center you selected, while geo-redundant gives you 3 additional copies in another Azure data center. Geo-redundant is typically twice the cost of a locally-redundant storage account and is the default option.

AZ10
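The storage account can also be created from Azure Powershell instead of the portal; a minimal sketch, where the account name and location are assumptions for this example:

# Sketch - "mylabstorage01" and "East US" are placeholders; pick your own name and data center
New-AzureStorageAccount -StorageAccountName "mylabstorage01" -Location "East US"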

In a minute or two, Azure will finish creating the storage account. Click on the account name:

AZ12

Next click Dashboard, and click Manage Access Keys at the bottom:

AZ131

Copy the account name and the primary access key. You will need them to use your storage account via Powershell later.

AZ14

Secure this information because it provides access to your Azure data. Data can be accessed using either the primary or the secondary key. Each key is 88 characters long and is made up of upper and lower case letters, numbers, and special characters. Having 2 keys allows us to change keys without interrupting access for applications or machines that use the account. For example, if the primary key you’re using in an application or machine is compromised, you can:

  1. Regenerate the secondary key
  2. Replace the key in the script/application/machine using the storage account (no access interruption)
  3. Regenerate the primary key

Now you have changed your account keys without any service interruption.
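The keys themselves can also be retrieved and regenerated from Azure Powershell; a minimal sketch, where the account name is again a placeholder:

# Sketch - "mylabstorage01" is a placeholder account name
Get-AzureStorageKey -StorageAccountName "mylabstorage01"                      # lists the primary and secondary keys
New-AzureStorageKey -StorageAccountName "mylabstorage01" -KeyType Secondary   # regenerates the secondary key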


Upgrading Gridstore software – step by step


Gridstore provides an alternative to traditional enterprise storage. Basic facts about Gridstore storage technology include:

  • It provides storage nodes implemented as 1 RU servers that function collectively as a single storage array.
  • Connectivity between the nodes and the storage consumers/compute nodes occurs over one or more 1 or 10 Gbps Ethernet connections.
  • NIC teaming can be set up on the Gridstore nodes to provide additional bandwidth and fault tolerance
  • It utilizes a virtual controller to present storage to Windows servers

Gridstore releases software updates from time to time. The following is a step-by-step overview of upgrading the Gridstore software. This process upgrades the software on all Gridstore storage nodes as well as all management/compute nodes. All vLUNs must be stopped before upgrading the software on the storage nodes.

  1. Stop all vLUNs. In GridControl snap-in, vPools=>(vPool_Name)=>vLUNs=> right-click on each vLUN and click STOP
  2. You can view your current version in GridControl, Help=> About GridControl – this shows the software version on the local management/compute node
    GS-u2
  3. You can also view the software version on all storage nodes in GridControl under Storage Nodes
    GS-u4
  4. To kick off the process of upgrading the software on all storage nodes in the grid, in GridControl right-click on The Grid, and click Upgrade Grid
    GS-u3
  5. Browse to the location of the Gridstore.msi file provided by Gridstore technical support and click Next.
    GS-u5
  6. The installer goes about upgrading the software on each storage node in the grid
    GS-u6
  7. This went quickly and completed for all 6 nodes in this configuration in a matter of minutes
    GS-u7
  8. Next we need to upgrade each compute node that uses the Gridstore storage. This can be done centrally from any management node. In GridControl, click vController Manager. On the right side you will see the list of your storage-consumers/compute nodes
    GS-ua
  9. Right-click on each node that you need to upgrade and click Upgrade
    Note: This option is only available for nodes that are Online.

 


Storage Spaces lab disk IO benchmark


In the post titled Using Powershell with Tiered Mirrored Storage Spaces I outlined setting up tiered storage spaces in a lab environment. Here I benchmark this inexpensive Storage Spaces lab’s IO performance. Testing details are in this post.

Hardware used:

  • Server CPU: one Xeon E5-2620 at 2 GHz with 6 cores (hyper-threaded to 12 logical processors) and 15 MB L3 cache
  • Server RAM: 64 GB of 1333 MHz DDR3 DIMM memory
  • Disks (not counting boot/system disks) – SSD tier: 6x SAMSUNG 840 Pro Series MZ-7PD256BW 2.5″ 256GB SATA III MLC
  • Disks HDD tier: 2x WD BLACK SERIES WD4003FZEX 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5″ disks

Benchmark result:

x16b

 

I was pleasantly surprised to get 13.5K IOPS out of this setup. Here are the details: xhost16-hv1-32k-50rw

Pros: extremely inexpensive setup using commodity hardware, well-suited for testing/lab/R&D, and ready to serve out block storage as iSCSI or file storage as SMB/NFS via SOFS. It also retains the standard benefits of Storage Spaces: it is manageable via Powershell, it is Software Defined Storage, it has an SMI-S WMI provider which makes it manageable from applications like VMM 2012 R2, and it can be intelligently monitored via SCOM 2012 R2.

Cons: not enterprise class, the server is a single point of failure, and it is not using 10/40/56 Gbps or RDMA NICs.


Benchmarking enterprise storage


I will be benchmarking and testing different use cases for new and emerging enterprise storage platforms. This post outlines the standardized benchmark testing that will be used.

Benchmark tool: Intel’s IOMeter version 2006.07.27

Settings:

  • Max disk size: 2,048,000 (8 GB iobw.tst file is generated at the root of the tested drive)
    x16c
  • Starting disk sector: 0
  • # of Outstanding I/Os: 32 (important)
  • IO profile: 32 KB; 50% read / 50% write
    x16d
  • Run time: 10 minutes

Migrating IP settings from one NIC to another using Powershell


Here’s an example scenario where the following script may be particularly useful:

Consider a GridStore array where each node currently uses one 1 Gbps NIC. After adding a 10 Gbps NIC to each node, we’d like to migrate the IP settings from the 1 Gbps NIC to the 10 Gbps NIC on each node. GridStore utilizes commodity rack mount server hardware and a robust software driver to present scalable, high performance, fully redundant vLUNs. More detailed posts on GridStore will follow.

This diagram shows network connectivity before adding the 10 Gbps NICs:

Before

 

After adding the 10 Gbps NICs:

After

Steps:

  1. You will need administrative credentials to the nodes from GridStore technical support
  2. From Server1, using GridControl snapin, stop all vLUNs:
    Stop-vLUNs
  3. RDP to each node.
    logon
  4. Currently nodes run Windows 7 Embedded and the RDP session will bring up a command prompt.
  5. Run control to open Control Panel, double-click Network and Sharing Center, then click Change Adapter Settings to view/confirm that you have 2 connected NICs:
    NICs
  6. Start Powershell ISE:
    start-ps-ise
  7. Copy/paste the following script and run it on each node:

    GS-2

# Script to move network configuration from one NIC to another on a GridStore node
# Sam Boutros
# 6/16/2014
# Works with Powershell 2.0
#
Set-Location "c:\support"
$Loc = Get-Location
$Date = Get-Date -Format yyyyMMdd_hhmmsstt
$logfile = $Loc.Path + "\Move-GSNIC_" + $env:COMPUTERNAME + "_" + $Date + ".txt"
function log($string) {
    Write-Host $string; $temp = ": " + $string
    $string = Get-Date -Format "yyyy.MM.dd hh:mm:ss tt"; $string += $temp
    $string | Out-File -FilePath $logfile -Append
}
#
log "Switching NIC configuration on $env:COMPUTERNAME"
$ConnectedNICs = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter 'IPEnabled="true"'
if ($ConnectedNICs.Count -lt 2) { log "Error: Less than 2 connected NICs:"; log $ConnectedNICs }
else {
    if ($ConnectedNICs.Count -gt 2) { log "Error: More than 2 connected NICs:"; log $ConnectedNICs }
    else { # 2 connected NICs
        Stop-Service GridStoreManagementService
        # Store NIC details in variables for later use
        $NIC0Index = $ConnectedNICs[0].Index; log "NIC 0 Index: $NIC0Index"
        $NIC0Desc = $ConnectedNICs[0].Description; log "NIC 0 Description: $NIC0Desc"
        $NIC0IPv4 = $ConnectedNICs[0].IPAddress[0]; log "NIC 0 IPv4: $NIC0IPv4"
        $NIC0Mask = $ConnectedNICs[0].IPSubnet[0]; log "NIC 0 Subnet Mask: $NIC0Mask"
        $Nic0 = Get-WmiObject Win32_NetworkAdapter -Filter "DeviceId = $NIC0Index"
        $NIC0ConnID = $Nic0.NetConnectionID; log "NIC 0 NetConnectionID: $NIC0ConnID"
        #
        $NIC1Index = $ConnectedNICs[1].Index; log "NIC 1 Index: $NIC1Index"
        $NIC1Desc = $ConnectedNICs[1].Description; log "NIC 1 Description: $NIC1Desc"
        $NIC1IPv4 = $ConnectedNICs[1].IPAddress[0]; log "NIC 1 IPv4: $NIC1IPv4"
        $NIC1Mask = $ConnectedNICs[1].IPSubnet[0]; log "NIC 1 Subnet Mask: $NIC1Mask"
        $Nic1 = Get-WmiObject Win32_NetworkAdapter -Filter "DeviceId = $NIC1Index"
        $NIC1ConnID = $Nic1.NetConnectionID; log "NIC 1 NetConnectionID: $NIC1ConnID"
        # Identify Target NIC (the new, unconfigured NIC with an APIPA 169.254.x.x address) and Source NIC
        if ($ConnectedNICs[0].IPAddress[0] -match "169.254") { $TargetNIC = 0 } else { $TargetNIC = 1 }
        $SourceNIC = 1 - $TargetNIC; log "Source NIC: NIC $SourceNIC"
        $SourceIP = $ConnectedNICs[$SourceNIC].IPAddress[0]
        $SourceMask = $ConnectedNICs[$SourceNIC].IPSubnet[0]
        log "Source IP: $SourceIP"
        # Set the Source NIC to DHCP, releasing its static IP
        log "Changing IP address of source NIC to DHCP"
        if ($ConnectedNICs[$SourceNIC].EnableDHCP().ReturnValue -eq 0) {
            log "==> IP setting change was successful" } else { log "==> IP setting change failed" }
        $ConnectedNICs[$SourceNIC].SetDNSServerSearchOrder()
        # Need to disable and re-enable the Source NIC for the settings to take effect (!?)
        if ($SourceNIC -eq 0) { $SourceAdapter = $Nic0 } else { $SourceAdapter = $Nic1 }
        $SourceAdapter.Disable(); Start-Sleep -s 2
        $SourceAdapter.Enable(); Start-Sleep -s 2
        # Set the Target NIC to the static IP address previously held by the Source NIC
        $TargetIP = $ConnectedNICs[$TargetNIC].IPAddress[0]
        log "Target IP: $TargetIP"
        log "Changing IP address of Target NIC to $SourceIP with subnet mask $SourceMask"
        if ($ConnectedNICs[$TargetNIC].EnableStatic($SourceIP,$SourceMask).ReturnValue -eq 0) {
            log "==> IP address change was successful" } else { log "==> IP address change failed" }
        Remove-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Gridstore\NetworkAdapter"
        Start-Sleep -s 2
        Start-Service GridStoreManagementService
    }
}
Invoke-Expression "$env:windir\system32\Notepad.exe $logfile"

 


Using Powershell with Tiered Mirrored Storage Spaces


Windows Server 2012 R2 is full of new and enhanced features compared to Server 2008 R2. One of the new features is Storage Spaces. Basics of working with Storage Spaces:

  • Present raw disks to Windows via standard SAS controllers. No hardware RAID. Simply present JBOD to Windows to be used with Storage Spaces.
  • Boot/System disks must use traditional disks, not Storage Spaces. Typically use a pair of hardware mirrored disks for boot/system partitions.
  • The basic structure is: Storage pools contain physical disks, we create virtual disks within Storage Pools. A virtual disk can then be partitioned into volumes that can be formatted as a regular disk.
  • Initially all physical disks appear in the “primordial” pool. Newly added disks also appear in the primordial pool. Disks in the primordial pool are visible in the Computer Management => Disk Management tool and can be used directly.
  • Storage Spaces supports automatic tiering. Only 2 tiers are supported; typically SSD and HDD tiers. Tiering moves cold (less frequently accessed) data to the HDD tier, and hot (more frequently accessed) data to the SSD tier for better performance.
  • Tiering runs as a once-a-day scheduled task at 1 AM by default, and can also be invoked manually (see the example after this list).
  • When setting up tiered Storage Spaces, parity is not an option (can do simple or mirrored layout only). Also thin provisioning is not an option with tiering.
  • Storage Spaces supports thin and thick (fixed) provisioning.  Tiered Storage Spaces supports only thick (fixed) provisioning.
  • Storage Spaces supports write-back cache. The default is 1 GB for tiered vDisks, 32 MB for non-tiered vDisks, 100 GB maximum.
  • Recommended SSD to HDD ratio is 1:4
  • Storage Spaces supports 3 types of fault tolerance:
  1. Simple: this is like a stripe set with no parity: fastest but provides no fault tolerance
  2. Mirror: a 2-way mirror requires a minimum of 2 disks and can survive a single disk failure. A 3-way mirror requires a minimum of 5 disks and can survive 2 simultaneous disk failures
  3. Parity: single parity requires minimum 3 disks and can survive a single disk failure. Dual parity requires minimum of 7 disks and can survive 2 simultaneous disk failures. Parity options are not available for tiered Storage Spaces.
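To invoke the tiering optimization manually (as mentioned in the list above), the built-in scheduled task can be started from Powershell; a minimal sketch, assuming the default task name and path as they appear on Server 2012 R2:

# Sketch - starts the built-in tier optimization task on demand
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" -TaskName "Storage Tiers Optimization" | Start-ScheduledTask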

Storage Spaces can be set up from the GUI: Server Manager => File and Storage Services => Volumes => Storage Pools.

css03

Powershell provides more control compared to the GUI when configuring Storage Spaces. For example, you can set the write-back cache size when using Powershell but not from the GUI.

The following script sets up Tiered Mirrored Storage Spaces. Here’s how the disk system looked in Computer Management before running the script:

css04

Here’s the script:

css05

# Script to create Storage Spaces pool, virtual disks, volumes using custom settings
# Assumes physical disks in the default primordial pool
# Creates Mirrored Tiered virtual disks - need even number of SSD and even number of HDD available disks
# Sam Boutros
# 6/22/2014
#
# In this example I have 6x 256GB SSD disks + 2x 4TB SAS physical disks (not counting boot/system disks of course)
# I'd like to end up with 3 mirrored and tiered vDisks of equal size using the maximum available space, with 25 GB write-back cache
# Customize the following settings to meet your specific hardware configuration
$PoolName = "Pool1"
$WBCache = 25 # GB (Default is 1 GB for tiered disks, 32 MB for non-tiered)
$TieredMirroredvDisks = @("HyperV1","HyperV2","HyperV3") # List names of mirrored-tiered vDisks you'd like to create
$DriveLetters = @("I","J","K") # List drive letters you'd like to assign to the new volumes
$BlockSize = 32 # KB
# End data entry section
#
$Loc = Get-Location
$Date = Get-Date -Format yyyyMMdd_hhmmsstt
$logfile = $Loc.Path + "\CreateSS_" + $Date + ".txt"
function log($string, $color) {
    if ($Color -eq $null) { $color = "white" }
    Write-Host $string -ForegroundColor $color
    $temp = ": " + $string
    $string = Get-Date -Format "yyyy.MM.dd hh:mm:ss tt"
    $string += $temp
    $string | Out-File -FilePath $logfile -Append
}
#
# Create new Storage Pool
$StorageSpaces = Get-StorageSubSystem -FriendlyName *Spaces*
$PhysicalDisks = Get-PhysicalDisk -CanPool $true | Sort Size | FT DeviceId, FriendlyName, CanPool, Size, HealthStatus, MediaType -AutoSize -ErrorAction SilentlyContinue
log "Available physical disks:" green
log ($PhysicalDisks | Out-String)
if (!$PhysicalDisks) {
    log "Error: no physical disks are available in the primordial pool..stopping" yellow
    break
}
$PhysicalDisks = Get-PhysicalDisk -CanPool $true -ErrorAction SilentlyContinue
# Count SSD and HDD disks and sizes, with some error detection
$SSDBytes = 0; $HDDBytes = 0
for ($i = 0; $i -lt $PhysicalDisks.Count; $i++) {
    if ($PhysicalDisks[$i].MediaType -eq "SSD") { $SSD++; $SSDBytes += $PhysicalDisks[$i].Size }
    if ($PhysicalDisks[$i].MediaType -eq "HDD") { $HDD++; $HDDBytes += $PhysicalDisks[$i].Size }
}
$Disks = $HDD + $SSD
if ($Disks -lt 4) { log "Error: Only $Disks disks are available. Need minimum 4 disks for mirrored-tiered storage spaces..stopping" yellow; break }
if ($SSD -lt 2) { log "Error: Only $SSD SSD disks are available. Need minimum 2 SSD disks for mirrored-tiered storage spaces..stopping" yellow; break }
if ($HDD -lt 2) { log "Error: Only $HDD HDD disks are available. Need minimum 2 HDD disks for mirrored-tiered storage spaces..stopping" yellow; break }
if ($SSD % 2 -ne 0) { log "Error: Found $SSD SSD disk(s). Need even number of SSD disks for mirrored storage spaces..stopping" yellow; break }
if ($HDD % 2 -ne 0) { log "Error: Found $HDD HDD disk(s). Need even number of HDD disks for mirrored storage spaces..stopping" yellow; break }
# Create new pool
log "Creating new Storage Pool '$PoolName':" green
$Status = New-StoragePool -FriendlyName $PoolName -StorageSubSystemFriendlyName $StorageSpaces.FriendlyName -PhysicalDisks $PhysicalDisks -ErrorAction SilentlyContinue
log ($Status | Out-String)
if ($Status.OperationalStatus -eq "OK") { log "Storage Pool creation succeeded" green } else { log "Storage Pool creation failed..stopping" yellow; break }
# Configure resiliency settings
Get-StoragePool $PoolName | Set-ResiliencySetting -Name Mirror -NumberofColumnsDefault 1 -NumberOfDataCopiesDefault 2
# Configure two tiers
Get-StoragePool $PoolName | New-StorageTier -FriendlyName SSDTier -MediaType SSD
Get-StoragePool $PoolName | New-StorageTier -FriendlyName HDDTier -MediaType HDD
$SSDSpace = Get-StorageTier -FriendlyName SSDTier
$HDDSpace = Get-StorageTier -FriendlyName HDDTier
# Create tiered/mirrored vDisks
$BlockSizeKB = $BlockSize * 1024
$WBCacheGB = $WBCache * 1024 * 1024 * 1024 # convert GB to bytes
$SSDSize = $SSDBytes/($TieredMirroredvDisks.Count*2) - ($WBCacheGB + (2*1024*1024*1024))
$HDDSize = $HDDBytes/($TieredMirroredvDisks.Count*2) - ($WBCacheGB + (2*1024*1024*1024))
$temp = 0
ForEach ($vDisk in $TieredMirroredvDisks) {
    log "Attempting to create vDisk '$vDisk'.."
    $Status = Get-StoragePool $PoolName | New-VirtualDisk -FriendlyName $vDisk -ResiliencySettingName Mirror -StorageTiers $SSDSpace, $HDDSpace -StorageTierSizes $SSDSize,$HDDSize -WriteCacheSize $WBCacheGB
    log ($Status | Out-String)
    $DriveLetter = $DriveLetters[$temp]
    if ($Status.OperationalStatus -eq "OK") {
        log "vDisk '$vDisk' creation succeeded" green
        log "Initializing disk '$vDisk'.."
        $InitDisk = $Status | Initialize-Disk -PartitionStyle GPT -PassThru # Initialize disk
        log ($InitDisk | Out-String)
        log "Creating new partition on disk '$vDisk', drive letter '$DriveLetter'.."
        $Partition = $InitDisk | New-Partition -UseMaximumSize -DriveLetter $DriveLetter # Create new partition
        log ($Partition | Out-String)
        log "Formatting new partition as volume '$vDisk', drive letter '$DriveLetter', NTFS, $BlockSize KB block size.."
        $Format = $Partition | Format-Volume -FileSystem NTFS -NewFileSystemLabel $vDisk -AllocationUnitSize $BlockSizeKB -Confirm:$false # Format new partition
        log ($Format | Out-String)
    } else { log "vDisk '$vDisk' creation failed..stopping" yellow; break }
    $temp++
}
Invoke-Expression "$env:windir\system32\Notepad.exe $logfile"

Here’s how the vDisks look after running the script:

css07

And here’s how the disks look in Computer Management => Disk Management:

css06

For more information check this link.