Benchmarking Azure VM storage


Azure Standard tier virtual machines can have a number of optional persistent page blob data disks attached, up to 1 TB each. Each of these disks is expected to deliver 500 IOPS or 60 MB/s of throughput.

[Image: Veeam-Azure05]

Let’s see if we can make that happen.

I started by creating a new Standard storage account, to make sure I wasn't hitting the 20,000 IOPS limit per Standard storage account during testing.
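
For reference, creating the account can be scripted with the classic Service Management cmdlets (a minimal sketch; the account name and region match this post, and the locally redundant Standard_LRS type is an assumption):

# Create a new Standard (locally redundant) storage account for the test
New-AzureStorageAccount -StorageAccountName 'testdisks1storageaccount' -Location 'East US 2' -Type 'Standard_LRS'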

I created a new Standard A3 VM.

[Image: Veeam-Azure06]
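
Creating the VM can be scripted too (a sketch, not necessarily how I did it; the gallery image filter and plain-text password variable are assumptions, and 'Large' is the classic instance size name for Standard A3):

# Pick the latest Windows Server 2012 R2 gallery image
$ImageName = (Get-AzureVMImage | ? { $_.Label -like 'Windows Server 2012 R2 Datacenter*' } |
  sort PublishedDate -Descending | select -First 1).ImageName
# Prompt for the admin password (plain text, as New-AzureQuickVM expects)
$PlainPwd = Read-Host "Enter the admin password"
# Create the cloud service and the Standard A3 ('Large') VM in one shot
New-AzureQuickVM -Windows -ServiceName 'testdisks1cloudservice' -Name 'testdisks1vm' `
  -ImageName $ImageName -InstanceSize 'Large' -Location 'East US 2' `
  -AdminUsername '###Removed###' -Password $PlainPwd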

From my on-premises management machine, I used PowerShell to create and attach 8x 1 TB disks to the VM. To get started with PowerShell for Azure see this post.

# Input
$SubscriptionName = '###Removed###' 
$StorageAccount = 'testdisks1storageaccount'
$VMName = 'testdisks1vm'
$ServiceName = 'testdisks1cloudservice'
$PwdFile = ".\$VMName-###Removed###.txt"
$AdminName = '###Removed###'
# Initialize
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount
$objVM = Get-AzureVM -Name $VMName -ServiceName $ServiceName
$VMFQDN = (Get-AzureWinRMUri -ServiceName $ServiceName).Host
$Port = (Get-AzureWinRMUri -ServiceName $ServiceName).Port
# Create and attach 8x 1 TB disks, one per LUN
0..7 | % { $_ # echo the current LUN number for progress
  $objVM | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "Disk$_" -LUN $_ | Update-AzureVM
}
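
A quick check from the same management machine confirms all eight disks attached (a sketch, using the variables defined above):

# List the data disks now attached to the VM
Get-AzureVM -Name $VMName -ServiceName $ServiceName | Get-AzureDataDisk |
  select DiskLabel, Lun, LogicalDiskSizeInGB | FT -a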

Next I used PowerShell remoting to create a Storage Space optimized for 1 MB writes. The intent is to use the VM with Veeam Cloud Connect, and that particular workload uses 1 MB write operations.

This code block connects to the Azure VM:

# Remote to the VM 
# Get the certificate for PowerShell remoting to the Azure VM, if it's not installed already
Write-Verbose "Adding certificate 'CN=$VMFQDN' to 'LocalMachine\Root' certificate store.." 
$Thumbprint = (Get-AzureVM -ServiceName $ServiceName -Name $VMName | 
  select -ExpandProperty VM).DefaultWinRMCertificateThumbprint
$Temp = [IO.Path]::GetTempFileName()
(Get-AzureCertificate -ServiceName $ServiceName -Thumbprint $Thumbprint -ThumbprintAlgorithm sha1).Data | Out-File $Temp
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $Temp
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root","LocalMachine"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($Cert)
$store.Close()
Remove-Item $Temp -Force -Confirm:$false
# Attempt to open a PowerShell session to the Azure VM
Write-Verbose "Opening PS session with computer '$VMName'.." 
if (-not (Test-Path -Path $PwdFile)) { 
  Write-Verbose "Pwd file '$PwdFile' not found, prompting for pwd.."
  Read-Host "Enter the pwd for '$AdminName' on '$VMFQDN'" -AsSecureString | 
    ConvertFrom-SecureString | Out-File $PwdFile 
}
$Pwd = Get-Content $PwdFile | ConvertTo-SecureString 
$Cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AdminName, $Pwd
$Session = New-PSSession -ComputerName $VMFQDN -Port $Port -UseSSL -Credential $Cred -ErrorAction Stop

Now I have an open PS session with the Azure VM and can execute commands and get output back.
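
For example, a trivial sanity check (any command in the script block runs on the VM):

# Run a command in the remote session and get its output back
Invoke-Command -Session $Session -ScriptBlock { $env:ComputerName }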

Next I verify the disks available for pooling on the VM:

$ScriptBlock = { Get-PhysicalDisk -CanPool $True }
$Result = Invoke-Command -Session $Session -ScriptBlock $ScriptBlock 
$Result | sort friendlyname | FT -a 

Then I create an 8-column simple storage space optimized for a 1 MB full stripe (8 columns x 128 KB interleave = 1 MB):

$ScriptBlock = { 
  $PoolName = "VeeamPool3"
  $vDiskName = "VeeamVDisk3"
  $VolumeLabel = "VeeamRepo3"
  # 8 columns x 128 KB interleave = 1 MB full stripe, matching the 1 MB write workload
  New-StoragePool -FriendlyName $PoolName -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) |
    New-VirtualDisk -FriendlyName $vDiskName -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Simple -NumberOfColumns 8 -Interleave 128KB |
      Initialize-Disk -PassThru -PartitionStyle GPT |
        New-Partition -AssignDriveLetter -UseMaximumSize | 
          Format-Volume -FileSystem NTFS -NewFileSystemLabel $VolumeLabel -AllocationUnitSize 64KB -Confirm:$false
  Get-VirtualDisk | select FriendlyName,Interleave,LogicalSectorSize,NumberOfColumns,PhysicalSectorSize,ProvisioningType,ResiliencySettingName,Size,WriteCacheSize
}
$Result = Invoke-Command -Session $Session -ScriptBlock $ScriptBlock 
$Result | FT -a 

Next I RDP to the VM and run IOMeter with the following settings (a DiskSpd equivalent is sketched after the list):

  • All 4 workers (one per CPU core)
  • Maximum Disk Size: 1,024,000 sectors (a ~500 MB test file)
  • # of Outstanding I/Os: 20 per target, to make sure the target disk gets plenty of requests during the test
  • Access specification: 64 KB, 50% read, 50% write
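
For anyone who prefers a fully scriptable benchmark, roughly the same test can be expressed with Microsoft's DiskSpd tool (a sketch, not what I ran; the drive letter, file name, and 60-second duration are assumptions):

# ~500 MB test file, 4 threads, 20 outstanding I/Os each,
# 64 KB random I/O, 50% write (and therefore 50% read), 60-second run
.\diskspd.exe -c500M -t4 -o20 -b64K -r -w50 -d60 E:\iotest.dat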

I’m expecting to see 4K IOPS and 480 MB/s throughput (8 disks x 500 IOPS or 60 MB/s each). But this is what I get:

[Image: Veeam-Azure02]

I got about 1.3K IOPS instead of 4K, and about 90 MB/s of throughput instead of 480 MB/s.

This VM and storage account are in the East US 2 region.

This is consistent with other people's findings, such as this post.

I also tested the same 8 disks as a traditional striped volume in Windows: I removed the volume, the vDisk, and the Storage Space, then provisioned a classic RAID 0 striped dynamic disk in this Windows Server 2012 R2 VM.
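
For reference, a striped dynamic volume like that can be scripted with diskpart (a sketch; the disk numbers and volume label are assumptions, so confirm the numbers with Get-Disk first):

# Write a diskpart script that converts the data disks to dynamic,
# stripes them (classic RAID 0), formats, and assigns a drive letter
$(foreach ($n in 2..9) { "select disk $n"; "convert dynamic" }
  "create volume stripe disk=2,3,4,5,6,7,8,9"
  "format fs=ntfs unit=64k label=VeeamStripe quick"
  "assign letter=E"
) | Set-Content .\stripe.txt
diskpart /s .\stripe.txt

Results were slightly better: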

[Image: Veeam-Azure03-Striped-Disk8]

This is still far short of the expected 4K IOPS or 480 MB/s I should be seeing here.


I upgraded the VM to Standard A4 tier, and repeated the same tests:

[Image: Veeam-Azure04]

A Standard A4 VM can have a maximum of 16x 1 TB persistent page blob disks. I used PowerShell to provision and attach 16 disks, then created a storage space with 16 columns optimized for a 1 MB full stripe (16 columns x 64 KB interleave = 1 MB):
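
That is essentially the same provisioning code as before, adjusted for 16 disks (a sketch; the pool, vDisk, and volume names are hypothetical, and it assumes the original eight disks were detached first):

# Attach 16x 1 TB disks (LUNs 0-15)
0..15 | % {
  Get-AzureVM -Name $VMName -ServiceName $ServiceName |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "Disk$_" -LUN $_ |
    Update-AzureVM
}
# 16 columns x 64 KB interleave = 1 MB full stripe
Invoke-Command -Session $Session -ScriptBlock {
  New-StoragePool -FriendlyName "VeeamPool4" -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) |
    New-VirtualDisk -FriendlyName "VeeamVDisk4" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Simple -NumberOfColumns 16 -Interleave 64KB |
      Initialize-Disk -PassThru -PartitionStyle GPT |
        New-Partition -AssignDriveLetter -UseMaximumSize |
          Format-Volume -FileSystem NTFS -NewFileSystemLabel "VeeamRepo4" -AllocationUnitSize 64KB -Confirm:$false
}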

[Image: Veeam-Azure07]

Then I benchmarked storage performance on drive E: using the exact same IOMeter settings as above:

[Image: Veeam-Azure08]

Results are proportionate to the Standard A3 VM test, but they still fall far short: I’m seeing about 2.7K IOPS instead of the expected 8K (16 x 500), and about 175 MB/s of throughput instead of the expected 960 MB/s (16 x 60).


The IOMeter ‘Maximum I/O Response Time’ is extremely high (26+ seconds). This has been a consistent finding across all my Azure VM tests, and it leads me to suspect that the disk requests are being throttled, possibly by the hypervisor.

 


One response

  1. Sean

    Great write up, would be interesting to see the same test run with ReFS as this seems to be the default file system for SQL servers in Azure.

    Thanks

    March 4, 2015 at 7:12 am
