Storage Spaces

Configuring Storage Spaces for Azure SQL IaaS

One way to run SQL Server in Azure is to deploy SQL IaaS by installing SQL Server on an Azure VM. This post goes over provisioning a disk subsystem in Azure optimized for SQL Server. In this demo we’re using a DS14v2 size VM, but any VM size that supports 16 data disks will do. To see the VM SKUs that support 16 or more data disks in a given Azure location, we can use the PowerShell cmdlet:

Get-AzureRmVMSize -Location 'eastus' | 
    where MaxDataDiskCount -GE 16 | sort MaxDataDiskCount

and we’ll see a list like this:

We deployed a VM from the Microsoft Gallery image of the latest Windows Server 2016 release, using an unmanaged OS disk in a storage account configured as:

  • Premium
  • LRS
  • GPv2
  • Hot

Attach 16 disks:

Next we provision and attach 16 1 TB unmanaged SSD data disks (page blobs) using the following PowerShell code:

$VMName = 'myVMName'
$RGName = 'myResourceGroupName'
0..15 | foreach {
    $VM = Get-AzureRmVM -ResourceGroupName $RGName -Name $VMName
    $DataDiskName = "$VMName-DataDisk-$_"
    $OSDiskUri    = $VM.StorageProfile.OsDisk.Vhd.Uri
    $DataDiskUri  = "$($OSDiskUri | Split-Path)\$DataDiskName.vhd".Replace('\','/')
    $ParameterSet = @{
        VM           = $VM
        Name         = $DataDiskName
        Caching      = 'ReadWrite'
        DiskSizeInGB = 1023
        Lun          = $_
        VhdUri       = $DataDiskUri
        CreateOption = 'Empty'
    }
    $VM = Add-AzureRmVMDataDisk @ParameterSet
    Update-AzureRmVM -ResourceGroupName $RGName -VM $VM
}

Create Storage Pool:

Next we RDP to the VM and provision a storage pool. In Server Manager, under File and Storage Services/Volumes/Storage Pools, we should see the 16 disks under the default ‘Primordial’ pool. We can right click on that and create a new pool. The important step here is to select ‘Manual’ disk allocation for each of the 16 disks. This matters because this is an all-SSD pool: with the default setting, the system reserves all 16 disks for Journal and we won’t be able to create any virtual disks.

The same task can be performed via PowerShell as follows:

$PoolName = 'SQLPool'
$ParameterSet = @{
    FriendlyName                 = $PoolName
    StorageSubSystemFriendlyName = (Get-StorageSubSystem).FriendlyName
    PhysicalDisks                = Get-PhysicalDisk -CanPool $True
}
New-StoragePool @ParameterSet
Get-PhysicalDisk -FriendlyName 'Msft Virtual Disk' |
    Set-PhysicalDisk -Usage ManualSelect # otherwise all SSD disks will be reserved for journal

Create Virtual disks and volumes:

Finally we create virtual disks and volumes as follows:

This particular configuration uses 8 of the pool’s 16 TB, leaving 8 TB for future virtual disk growth. A virtual disk can be expanded in Server Manager, followed by volume expansion in the Disk Management tool.

Being 2-way mirrors, the virtual disks in this configuration can survive a single disk failure, although this is mostly a theoretical concern given that the 16 disks are already triple-redundant (each block of each disk is synchronously written to 3 physical underlying disks).

2-way mirrored virtual disks also enhance read performance, since read operations can be served from either of the 2 disks in a mirrored space.

For the data/log/temp virtual disks, the interleave size has been dropped to 64 KB from the default 256 KB, since SQL Server writes are 8-32 KB. With 8 columns, this makes the data stripe size 8 x 64 KB = 512 KB.
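The virtual disk layout described above can also be sketched in PowerShell. This is a minimal sketch, assuming the ‘SQLPool’ pool from the previous step and a hypothetical vDisk name ‘SQLData’ at an illustrative 2 TB size; the 2-way mirror, 8 columns, and 64 KB interleave match the configuration discussed here:

```powershell
# Create a 2-way mirrored virtual disk with 8 columns and a 64 KB interleave,
# then initialize, partition, and format it with a 64 KB allocation unit size.
# 'SQLPool' is the pool created earlier; 'SQLData' and the 2TB size are assumptions.
$ParameterSet = @{
    StoragePoolFriendlyName = 'SQLPool'
    FriendlyName            = 'SQLData'
    ResiliencySettingName   = 'Mirror'
    NumberOfDataCopies      = 2
    NumberOfColumns         = 8
    Interleave              = 64KB
    Size                    = 2TB
    ProvisioningType        = 'Fixed'
}
New-VirtualDisk @ParameterSet |
    Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SQLData' -AllocationUnitSize 64KB -Confirm:$false
```

The same pattern repeats for the log and temp vDisks, varying only the name and size.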

Upgrading Server 2012 R2 to Server 2016 and Storage Spaces

Server 2016 enhances Storage Spaces and adds new features, most notably Storage Spaces Direct, Storage Replica, and Storage QoS. This post explores upgrading a physical Server 2012 R2 machine that uses a mirrored, tiered storage space.

After installing Server 2016 (Desktop Experience) and choosing to keep ‘nothing’:


In Server Manager, File and Storage Services\Volumes\Storage Pools, we see the old Storage Pool from the prior installation of Server 2012 R2


To recover the Storage Pool, its virtual disks, and all data follow these steps:

  1. Set read-write access on the pool
  2. Upgrade the Storage Pool version. Note that this step is irreversible
  3. Right click on each virtual disk and attach it
  4. Finally, in Disk Management, right click on each virtual disk and online it

The virtual disks retain the drive letters and volume labels assigned to them in the old 2012 R2 server. All data is intact.
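The same four recovery steps can be sketched in PowerShell. This is a sketch, not the method used above; the pool name ‘Pool1’ is an assumption, so check `Get-StoragePool` output for the actual name:

```powershell
# Recover an old storage pool after an OS reinstall (pool name is an assumption)
$Pool = Get-StoragePool -FriendlyName 'Pool1'
# 1. Set read-write access
$Pool | Set-StoragePool -IsReadOnly $false
# 2. Upgrade the pool version (irreversible)
$Pool | Update-StoragePool -Confirm:$false
# 3. Attach each virtual disk
$Pool | Get-VirtualDisk | Connect-VirtualDisk
# 4. Online the corresponding disks
$Pool | Get-VirtualDisk | Get-Disk | Set-Disk -IsOffline $false
```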


Benchmarking Azure VM storage

Azure Standard tier virtual machines come with an optional number of persistent page blob disks, up to 1 terabyte each. These disks are expected to deliver 500 IOPS or 60 MB/s throughput each.

Let’s see if we can make that happen.

I started by creating a new Standard storage account. This is to make sure I’m not hitting the 20k IOPS per Standard storage account limit during testing.

I created a new Standard A3 VM.


From my on-premises management machine, I used PowerShell to create and attach 8x 1TB disks to the VM. To get started with PowerShell for Azure see this post.

# Input
$SubscriptionName = '###Removed###' 
$StorageAccount = 'testdisks1storageaccount'
$VMName = 'testdisks1vm'
$ServiceName = 'testdisks1cloudservice'
$PwdFile = ".\$VMName-###Removed###.txt"
$AdminName = '###Removed###'
# Initialize
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount
$objVM = Get-AzureVM -Name $VMName -ServiceName $ServiceName
$VMFQDN = (Get-AzureWinRMUri -ServiceName $ServiceName).Host
$Port = (Get-AzureWinRMUri -ServiceName $ServiceName).Port
# Create and attach 8x 1TB disks
0..7 | % {
    $objVM | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "Disk$_" -LUN $_ | Update-AzureVM
}

Next I used PowerShell remoting to create a storage space optimized for 1 MB writes. The intent is to use the VM with Veeam Cloud Connect; this particular workload uses 1 MB write operations.

This code block connects to the Azure VM:

# Remote to the VM 
# Install the certificate for PowerShell remoting to the Azure VM if not installed already
Write-Verbose "Adding certificate 'CN=$VMFQDN' to 'LocalMachine\Root' certificate store.." 
$Thumbprint = (Get-AzureVM -ServiceName $ServiceName -Name $VMName | 
    select -ExpandProperty VM).DefaultWinRMCertificateThumbprint
$Temp = [IO.Path]::GetTempFileName()
(Get-AzureCertificate -ServiceName $ServiceName -Thumbprint $Thumbprint -ThumbprintAlgorithm sha1).Data | Out-File $Temp
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $Temp
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root","LocalMachine"
$store.Open('ReadWrite')
$store.Add($Cert)
$store.Close()
Remove-Item $Temp -Force -Confirm:$false
# Attempt to open a PowerShell session to the Azure VM
Write-Verbose "Opening PS session with computer '$VMName'.." 
if (-not (Test-Path -Path $PwdFile)) { 
    Write-Verbose "Pwd file '$PwdFile' not found, prompting for pwd.."
    Read-Host "Enter the pwd for '$AdminName' on '$VMFQDN'" -AsSecureString | 
        ConvertFrom-SecureString | Out-File $PwdFile 
}
$Pwd = Get-Content $PwdFile | ConvertTo-SecureString 
$Cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AdminName, $Pwd
$Session = New-PSSession -ComputerName $VMFQDN -Port $Port -UseSSL -Credential $Cred -ErrorAction Stop

Now I have an open PS session with the Azure VM and can execute commands and get output back.

Next I check/verify available disks on the VM:

$ScriptBlock = { Get-PhysicalDisk -CanPool $True }
$Result = Invoke-Command -Session $Session -ScriptBlock $ScriptBlock 
$Result | sort friendlyname | FT -a 

Then I create an 8-column simple storage space with a 1 MB stripe size:

$ScriptBlock = { 
    $PoolName = 'VeeamPool3'
    $vDiskName = 'VeeamVDisk3'
    $VolumeLabel = 'VeeamRepo3'
    # 8 columns (one per disk) x 128 KB interleave = 1 MB stripe
    New-StoragePool -FriendlyName $PoolName -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) |
        New-VirtualDisk -FriendlyName $vDiskName -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Simple -NumberOfColumns 8 -Interleave 128KB |
            Initialize-Disk -PassThru -PartitionStyle GPT |
                New-Partition -AssignDriveLetter -UseMaximumSize | 
                    Format-Volume -FileSystem NTFS -NewFileSystemLabel $VolumeLabel -AllocationUnitSize 64KB -Confirm:$false
    Get-VirtualDisk | select FriendlyName,Interleave,LogicalSectorSize,NumberOfColumns,PhysicalSectorSize,ProvisioningType,ResiliencySettingName,Size,WriteCacheSize
}
$Result = Invoke-Command -Session $Session -ScriptBlock $ScriptBlock 
$Result | FT -a 

Next I RDP to the VM, and run IOMeter using the following settings:

  • All 4 workers (CPU cores)
  • Maximum Disk Size 1,024,000 sectors (512 MB test file)
  • # of Outstanding IOs: 20 per target. This is to make sure the target disk gets plenty of requests during the test
  • Access specification: 64KB, 50% read, 50% write
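For reference, a similar workload can be run with Microsoft’s DiskSpd tool. This is a sketch mirroring the IOMeter settings above, not the benchmark actually used in this post; the file path, 60-second duration, and random-access flag are assumptions:

```shell
: # DiskSpd approximation of the IOMeter run: 64 KB I/Os, 50% write (-w50),
: # 20 outstanding I/Os per target (-o20), 4 threads (-t4), 512 MB test file (-c512M);
: # -r makes access random, -Sh disables software and hardware write caching
diskspd.exe -b64K -w50 -o20 -t4 -r -Sh -d60 -c512M E:\iotest.dat
```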

I’m expecting to see 4K IOPS and 480 MB/s throughput (8 disks x 500 IOPS or 60 MB/s each). But this is what I get:


I got about 1.3 K IOPS instead of 4K, and about 90 MB/s throughput instead of 480 MB/s.

This VM and storage account are in East US 2 region.

This is consistent with other people’s findings like this post.

I also tested using the same 8 disks as a striped disk in Windows. I removed the volume, vDisk, Storage Space, then provisioned a traditional RAID 0 striped disk in this Windows Server 2012 R2 VM. Results were slightly better:


This is still far off the expected 4k IOPS or 480 MB/s I should be seeing here.

I upgraded the VM to Standard A4 tier, and repeated the same tests:


A Standard A4 VM can have a maximum of 16x 1TB persistent page blob disks. I used PowerShell to provision and attach 16 disks, then created a storage space with 16 columns optimized for a 1 MB stripe:
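The 16-disk layout can be sketched the same way as the 8-disk one. The pool, vDisk, and volume names here are assumptions (the originals are not shown); with 16 columns, a 64 KB interleave yields the 1 MB stripe, and drive E: matches the benchmark below:

```powershell
# 16 columns (one per disk) x 64 KB interleave = 1 MB stripe; names are assumptions
New-StoragePool -FriendlyName 'VeeamPool4' -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) |
    New-VirtualDisk -FriendlyName 'VeeamVDisk4' -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Simple -NumberOfColumns 16 -Interleave 64KB |
        Initialize-Disk -PassThru -PartitionStyle GPT |
            New-Partition -DriveLetter E -UseMaximumSize |
                Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VeeamRepo4' -AllocationUnitSize 64KB -Confirm:$false
```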


Then I benchmarked storage performance on drive E: using the exact same IOMeter settings as above:


Results are proportionate to the Standard A3 VM test, but they still fall far short.

I’m seeing 2.7K IOPS instead of the expected 8K IOPS, and about 175 MB/s throughput instead of the expected 960 MB/s.

The IOMeter ‘Maximum I/O Response Time’ is extremely high (26+ seconds). This has been a consistent finding in all Azure VM testing, which leads me to suspect that the disk requests are being throttled (possibly by the hypervisor).


Storage Spaces lab disk IO benchmark

In the post titled Using Powershell with Tiered Mirrored Storage Spaces I outlined setting up tiered storage spaces in a lab environment. Here I benchmark this inexpensive Storage Spaces lab’s IO performance. Testing details are in this post.

Hardware used:

  • Server CPU: one Xeon E5-2620 at 2 GHz – it has 6 cores (hyperthreaded to 12 logical processors and 15 MB L3 cache)
  • Server RAM: 64 GB of 1333 MHz DDR3 DIMM memory
  • SSD tier disks (not counting boot/system disks): 6x SAMSUNG 840 Pro Series MZ-7PD256BW 2.5″ 256GB SATA III MLC
  • HDD tier disks: 2x WD BLACK SERIES WD4003FZEX 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5″ disks

Benchmark result:



I was pleasantly surprised to get 13.5K IOPS out of this setup. Here are the details: xhost16-hv1-32k-50rw

Pros: extremely inexpensive setup using commodity hardware, well-suited for testing/lab/R&D, and ready to serve out block storage as iSCSI or file storage as SMB/NFS via SOFS. It also carries the standard benefits of Storage Spaces: manageable via PowerShell, Software Defined Storage, an SMI-S WMI provider that makes it manageable from applications like VMM 2012 R2, and intelligent monitoring via SCOM 2012 R2.

Cons: not enterprise class; the server is a single point of failure, and it uses neither 10/40/56 Gbps NICs nor RDMA NICs.

Using Powershell with Tiered Mirrored Storage Spaces

Windows Server 2012 R2 is full of new and enhanced features compared to Server 2008 R2. One of the new features is Storage Spaces. The basics of working with Storage Spaces:

  • Present raw disks to Windows via standard SAS controllers. No hardware RAID. Simply present JBOD to Windows to be used with Storage Spaces.
  • Boot/System disks must use traditional disks, not Storage Spaces. Typically use a pair of hardware mirrored disks for boot/system partitions.
  • The basic structure is: Storage pools contain physical disks, we create virtual disks within Storage Pools. A virtual disk can then be partitioned into volumes that can be formatted as a regular disk.
  • Initially all physical disks appear in the “primordial” pool. Newly added disks also appear in the primordial pool. Disks in the primordial pool are visible in the Computer Management => Disk Management tool and can be used directly.
  • Storage Spaces supports automatic tiering. Only 2 tiers are supported; typically SSD and HDD tiers. Tiering moves cold (less frequently accessed) data to the HDD tier, and hot (more frequently accessed) data to the SSD tier for better performance.
  • Tiering runs as a once-a-day scheduled task at 1 AM by default, and can be manually invoked.
  • When setting up tiered Storage Spaces, parity is not an option (simple or mirrored layouts only).
  • Storage Spaces supports thin and thick (fixed) provisioning. Tiered Storage Spaces supports only thick (fixed) provisioning.
  • Storage Spaces supports write-back cache. The default is 1 GB for tiered vDisks, 32 MB for non-tiered vDisks, 100 GB maximum.
  • Recommended SSD to HDD ratio is 1:4
  • Storage Spaces supports 3 types of fault tolerance:
  1. Simple: this is like a stripe set with no parity: fastest but provides no fault tolerance
  2. Mirror: a 2-way mirror requires a minimum of 2 disks and can survive a single disk failure. A 3-way mirror requires a minimum of 5 disks and can survive 2 simultaneous disk failures
  3. Parity: single parity requires minimum 3 disks and can survive a single disk failure. Dual parity requires minimum of 7 disks and can survive 2 simultaneous disk failures. Parity options are not available for tiered Storage Spaces.
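The once-a-day tiering task mentioned above can also be invoked on demand, and individual hot files can be pinned to the SSD tier. A quick sketch, where the file path, drive letter, and ‘SSDTier’ name are assumptions:

```powershell
# Manually invoke the tier optimization task instead of waiting for the 1 AM run
Get-ScheduledTask -TaskName 'Storage Tiers Optimization' | Start-ScheduledTask

# Optionally pin a hot file permanently to the SSD tier (path and tier name are examples)
Set-FileStorageTier -FilePath 'I:\VMs\hot.vhdx' -DesiredStorageTier (Get-StorageTier -FriendlyName SSDTier)
Optimize-Volume -DriveLetter I -TierOptimize # apply the pinning immediately
```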

Storage Spaces can be setup from the GUI: Server Manager => File and Storage Services => Volumes => Storage Pools.


PowerShell provides more control than the GUI when configuring Storage Spaces. For example, you can set the write-back cache size from PowerShell but not from the GUI.

The following script sets up tiered, mirrored Storage Spaces. Here’s how the disk system looked in Computer Management before the script:


Here’s the script:


# Script to create Storage Spaces pool, virtual disks, volumes using custom settings
# Assumes physical disks in the default primordial pool
# Creates mirrored tiered virtual disks - needs an even number of SSD and an even number of HDD available disks
# Sam Boutros
# 6/22/2014
# In this example I have 6x 256GB SSD disks + 2x 4TB SAS physical disks (not counting boot/system disks of course)
# I'd like to end up with 3 mirrored and tiered vDisks of equal size using the maximum available space, with 25 GB write-back cache
# Customize the following settings to meet your specific hardware configuration
$PoolName = 'Pool1'
$WBCache = 25 # GB (default is 1 GB for tiered disks, 32 MB for non-tiered)
$TieredMirroredvDisks = @('HyperV1','HyperV2','HyperV3') # Names of the mirrored-tiered vDisks to create
$DriveLetters = @('I','J','K') # Drive letters to assign to the new volumes
$BlockSize = 32 # KB
# End data entry section
$Loc = Get-Location
$Date = Get-Date -Format yyyyMMdd_hhmmsstt
$logfile = $Loc.Path + "\CreateSS_" + $Date + ".txt"
function log ($String, $Color) {
    if ($null -eq $Color) { $Color = 'White' }
    Write-Host $String -ForegroundColor $Color
    $Temp = ': ' + $String
    $String = Get-Date -Format 'yyyy.MM.dd hh:mm:ss tt'
    $String += $Temp
    $String | Out-File -FilePath $logfile -Append
}
# Identify poolable physical disks
$StorageSpaces = Get-StorageSubSystem -FriendlyName *Spaces*
$PhysicalDisks = Get-PhysicalDisk -CanPool $true -ErrorAction SilentlyContinue
log 'Available physical disks:' Green
log ($PhysicalDisks | Sort-Object Size |
    Format-Table DeviceId, FriendlyName, CanPool, Size, HealthStatus, MediaType -AutoSize | Out-String)
if (-not $PhysicalDisks) {
    log 'Error: no physical disks are available in the primordial pool..stopping' Yellow; break
}
# Count SSD and HDD disks and sizes, some error detection
$SSD = 0; $HDD = 0; $SSDBytes = 0; $HDDBytes = 0
for ($i = 0; $i -lt $PhysicalDisks.Count; $i++) {
    if ($PhysicalDisks[$i].MediaType -eq 'SSD') { $SSD++; $SSDBytes += $PhysicalDisks[$i].Size }
    if ($PhysicalDisks[$i].MediaType -eq 'HDD') { $HDD++; $HDDBytes += $PhysicalDisks[$i].Size }
}
$Disks = $HDD + $SSD
if ($Disks -lt 4) { log "Error: Only $Disks disks are available. Need minimum 4 disks for mirrored-tiered storage spaces..stopping" Yellow; break }
if ($SSD -lt 2) { log "Error: Only $SSD SSD disks are available. Need minimum 2 SSD disks for mirrored-tiered storage spaces..stopping" Yellow; break }
if ($HDD -lt 2) { log "Error: Only $HDD HDD disks are available. Need minimum 2 HDD disks for mirrored-tiered storage spaces..stopping" Yellow; break }
if ($SSD % 2 -ne 0) { log "Error: Found $SSD SSD disk(s). Need even number of SSD disks for mirrored storage spaces..stopping" Yellow; break }
if ($HDD % 2 -ne 0) { log "Error: Found $HDD HDD disk(s). Need even number of HDD disks for mirrored storage spaces..stopping" Yellow; break }
# Create new pool
log "Creating new Storage Pool '$PoolName':" Green
$Status = New-StoragePool -FriendlyName $PoolName -StorageSubSystemFriendlyName $StorageSpaces.FriendlyName -PhysicalDisks $PhysicalDisks -ErrorAction SilentlyContinue
log ($Status | Out-String)
if ($Status.OperationalStatus -eq 'OK') { log 'Storage Pool creation succeeded' Green } else { log 'Storage Pool creation failed..stopping' Yellow; break }
# Configure resiliency settings
Get-StoragePool $PoolName | Set-ResiliencySetting -Name Mirror -NumberofColumnsDefault 1 -NumberOfDataCopiesDefault 2
# Configure two tiers
Get-StoragePool $PoolName | New-StorageTier -FriendlyName SSDTier -MediaType SSD
Get-StoragePool $PoolName | New-StorageTier -FriendlyName HDDTier -MediaType HDD
$SSDSpace = Get-StorageTier -FriendlyName SSDTier
$HDDSpace = Get-StorageTier -FriendlyName HDDTier
# Create tiered/mirrored vDisks
$BlockSizeKB = $BlockSize * 1024
$WBCacheGB = $WBCache * 1024 * 1024 * 1024 # GB in bytes
# Split each tier evenly across the vDisks (x2 for mirroring), leaving room for write-back cache plus 2 GB overhead
$SSDSize = $SSDBytes / ($TieredMirroredvDisks.Count * 2) - ($WBCacheGB + (2 * 1024 * 1024 * 1024))
$HDDSize = $HDDBytes / ($TieredMirroredvDisks.Count * 2) - ($WBCacheGB + (2 * 1024 * 1024 * 1024))
$temp = 0
foreach ($vDisk in $TieredMirroredvDisks) {
    log "Attempting to create vDisk '$vDisk'.."
    $Status = Get-StoragePool $PoolName | New-VirtualDisk -FriendlyName $vDisk -ResiliencySettingName Mirror -StorageTiers $SSDSpace, $HDDSpace -StorageTierSizes $SSDSize, $HDDSize -WriteCacheSize $WBCacheGB
    log ($Status | Out-String)
    $DriveLetter = $DriveLetters[$temp]
    if ($Status.OperationalStatus -eq 'OK') {
        log "vDisk '$vDisk' creation succeeded" Green
        log "Initializing disk '$vDisk'.."
        $InitDisk = $Status | Initialize-Disk -PartitionStyle GPT -PassThru # Initialize disk
        log ($InitDisk | Out-String)
        log "Creating new partition on disk '$vDisk', drive letter '$DriveLetter'.."
        $Partition = $InitDisk | New-Partition -UseMaximumSize -DriveLetter $DriveLetter # Create new partition
        log ($Partition | Out-String)
        log "Formatting new partition as volume '$vDisk', drive letter '$DriveLetter', NTFS, $BlockSize KB block size.."
        $Format = $Partition | Format-Volume -FileSystem NTFS -NewFileSystemLabel $vDisk -AllocationUnitSize $BlockSizeKB -Confirm:$false # Format new partition
        log ($Format | Out-String)
    } else { log "vDisk '$vDisk' creation failed..stopping" Yellow; break }
    $temp++
}
Invoke-Expression "$env:windir\system32\Notepad.exe $logfile"

Here’s how the vDisks look after the script:


And here’s how the disks look in Computer Management => Disk Management:


For more information check this link.